A task-based FAQ
This section answers questions that often arise about using Reblaze.
Adjusting the caching policy
Web clients can cache resources from a server. Afterwards, a client can access its local cache, which reduces the number of requests sent to the server.
Servers can instruct clients to implement caching in certain ways. Servers can also set separate caching policies for any intermediary proxies between the server and the client.
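For example, a response with the following header (a generic HTTP illustration, not a Reblaze-specific setting) allows browsers to cache a resource for 60 seconds, while allowing shared proxy caches to keep it for 300 seconds:

```http
Cache-Control: max-age=60, s-maxage=300
```

Here, max-age applies to all caches, while s-maxage overrides it for shared caches such as proxies.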
Reblaze is a proxy between the clients and the backend. When the backend responds to clients, the outgoing responses pass through Reblaze. You can instruct Reblaze to preserve or alter the caching instructions in those responses.
This can be done through Cloud Functions. A future update will provide multiple cache policy templates. Meanwhile, if you need immediate assistance in customizing Reblaze's caching behavior, contact support.
Allowing a loadtest
Sometimes a customer wishes to "attack" an application or server as part of a loadtest. Under normal circumstances, Reblaze would prevent this, because it would enforce rate limits and block the excessive requests from reaching the upstream network.
The test can be accomplished by creating an ACL Policy which allows the source IP (or range of IPs), then naming that ACL Policy with a suffix of "OC" (for example, the ACL Policy might be named "Loadtest OC").
As discussed in the Special ACLs section, the "OC" suffix means that the IP will not be subjected to normal rate limit testing. This allows that IP to generate a large amount of traffic, which will be passed through upstream without being filtered by Reblaze.
Blocking an attacker that an ACL allows
Sometimes existing ACL Policies can make it difficult to block a specific attacker.
For example, let's say an ACL Policy (named "Allow US Traffic") for an ecommerce store allows all traffic originating from the United States. But then an attacker, using an IP within the US, begins to scrape prices and other data from the store.
If an ACL Policy were added to Deny that IP address, it wouldn't work. The hierarchy of ACL Policy enforcement means that the "Allow US Traffic" Policy will take precedence, and the 'Deny' Policy for that IP will never be invoked.
One way to quickly solve this problem is to add an ACL Policy with a name ending in the suffix "XDeny" (for example, "Block US Scraper XDeny"). As discussed in the Special ACLs section, that suffix moves the ACL Policy to the top of the hierarchy.
Therefore, that ACL Policy will be invoked and enforced for that IP address, and the attacking IP will be blocked, before the "Allow US Traffic" Policy has a chance to grant it access.
Long-term use of this capability is not optimal; relying on it too frequently tends to create a collection of ACL Policies that is messy, confusing, and difficult to manage. The better approach is to use it as a temporary tool for solving a specific problem, while a robust set of ACL Policies is constructed and tested to solve the problem correctly.
Blocking requests that do not conform to content policies
There are several ways to filter requests based on their content.
The first method uses Session Profiling:
Create one or more lists of content filters in Session Profiling, with appropriate tags.
Create ACL Policies for each tag with appropriate actions (Bypass, Deny, or Allow).
Include the ACL Policies in one or more ACL Profiles.
Include the ACL Profiles in one or more Security Profiles.
The second method uses Custom Signatures, which can specify content restrictions. They are included within ACL Policies, which are used within Profiles, which are assigned to the various locations of your site/application on the Web Proxy page.
The third method is Args Analysis, which examines the characters found in arguments. Depending on its mode, it can block requests when unexpected characters are found, or pass them on to the WAF for further inspection. It can also act as an inverse content filter: requests whose arguments contain only whitelisted characters can bypass WAF filtering.
Custom Signatures and Args Analysis will be deprecated in a future release. For content filtering, it is recommended that you use the first method based on Session Profiling instead.
Banning, blacklisting, and whitelisting traffic sources
The Quarantine section shows a list of traffic sources (i.e., sources of incoming requests) that are currently banned, blacklisted, or whitelisted.
A traffic source is banned automatically when it violates a Dynamic Rule. You cannot manually ban a requestor.
However, you can accomplish the same effect by blacklisting the requestor. Instructions are here.
You can manually remove a requestor from the Banlist or Blacklist. Instructions are here.
Whitelisting a traffic source will make it exempt from Dynamic Rules. Instructions are here.
Protecting external pages
Often, a web application will use external pages. For example, within digital marketing campaigns, it’s common to use landing pages that are provided by a third-party service.
Since these pages are generally hosted by the service provider, they are not directly protected by Reblaze. However, a customer can still use Reblaze to scrub traffic coming from them, and block hostile visitors from affecting the parent site (i.e., the site/application that is using the third-party service).
For example, bots can stuff false information into landing-page forms and submit it to the parent site. Reblaze cannot prevent the bots from accessing the third-party landing page. However, it can interdict the form submissions, and prevent them from passing through to the parent site.
Most third-party services allow code to be embedded into their pages. The following process takes advantage of this.
Add the following code to the page (ideally, if the service allows it, in the page header):
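The exact snippet can vary by deployment, and support can provide the recommended form for your site; the following is a minimal sketch, assuming a hidden iframe so that the active challenge's JavaScript can run in the visitor's browser:

```html
<!-- Hypothetical sketch: replace www.example.com and pixel-path with your own values -->
<iframe src="https://www.example.com/pixel-path" style="display:none"></iframe>
```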
Explanation:
“www.example.com” is your site.
“pixel-path” is a non-existent path. In other words, the overall address won’t actually resolve to a page or resource within your site.
The purpose of this request is not to return a resource or page; it is merely to trigger a call into the parent site. The call will trigger an active challenge.
If the visitor is a bot, the challenge will fail. Subsequent access attempts to the parent site (e.g., form submission attempts by a bot) will be blocked.
If the visitor is a human, the web browser will receive Reblaze’s authenticating cookies. Subsequent actions by the visitor will include the cookies, and will be accepted by Reblaze.
Note that Reblaze needs to be configured so that active challenges occur on the pixel-path URL given above.
SIEM/SOC integration
Reblaze integrates with a wide range of SIEM (Security Information and Event Management) and SOC (Security Operations Center) solutions. Nearly 80% of our enterprise clients stream Reblaze events to their SOC, using platforms such as ArcSight, RSA, IBM, and Splunk.
Reblaze sends logs over TLS using the Syslog protocol.
The variety of available integrations makes it impossible to describe the process for each of them here. Our team will assist you with the integration, to ensure your platforms get the relevant information as soon as it becomes available.
To make the connection, Reblaze requires the following:
Destination IP/FQDN
Destination port
Destination's public SSL certificate in PEM format. (To prevent MITM attacks, Reblaze performs SSL pinning.)
Please have this information available when you contact us to begin the integration.
Raw logs are sent in the format described here.
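For illustration, a receiving endpoint built on rsyslog could accept TLS-encrypted syslog along these lines (a hypothetical sketch; the file paths, port, and authentication mode are assumptions to adapt to your environment):

```
# Hypothetical rsyslog receiver for TLS-encrypted syslog.
# The certificate file below is the public certificate (PEM) shared with Reblaze.
global(
  DefaultNetstreamDriver="gtls"
  DefaultNetstreamDriverCAFile="/etc/ssl/ca.pem"
  DefaultNetstreamDriverCertFile="/etc/ssl/server-cert.pem"
  DefaultNetstreamDriverKeyFile="/etc/ssl/server-key.pem"
)
# Listen for TCP syslog over TLS on the destination port given to Reblaze.
module(load="imtcp" StreamDriver.Name="gtls" StreamDriver.Mode="1" StreamDriver.AuthMode="anon")
input(type="imtcp" port="6514")
```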
Restricting consumption of resources and rate of requests
Different types of rate limits are defined in different parts of the Reblaze interface.
Static rate limits for each application: Timeouts and request limits per IP are set for each application in the Advanced Frontend Settings section of the Web Proxy page.
By location: Rate limits for specific locations/URLs can be created as Rate Limiting Rules, which are then included in Security Profiles.
By traffic source: Requestors who submit excessive requests can be banned for configured lengths of time. This can be done via Dynamic Rules.
In some situations, either Rate Limiting Rules or Dynamic Rules could be used. Rate Limiting Rules are the preferable option: they are more powerful and more flexible, and in a future release they will fully replace Dynamic Rules.
Creating exemptions from rate limits
Creating exemptions from rate limits is done differently, depending on the scope of the rate limits being addressed.
Global: Create an ACL Policy with an OC suffix.
By location: Create an ACL Policy with the name "Rate Limit Whitelist". This can exempt any combination of IP, Country, and ASN. The Policy should then be included in a Profile, and the Profile should be assigned to the appropriate location(s) or portions of your site/application.
By traffic source: A traffic source can be exempted from Dynamic Rule filtering either by adding an Ignore parameter to the Rule itself, or by adding the traffic source to the Whitelist within the Quarantine section.