Often, a web application will use external pages. For example, within digital marketing campaigns, it’s common to use landing pages that are provided by a third-party service.
Since these pages are generally hosted by the service provider, they are not directly protected by Reblaze. However, a customer can still use Reblaze to scrub traffic coming from them, and block hostile visitors from affecting the parent site (i.e., the site/application that is using the third-party service).
For example, bots can stuff false information into landing-page forms and submit it to the parent site. Reblaze cannot prevent the bots from accessing the third-party landing page. However, it can interdict the form submissions, and prevent them from passing through to the parent site.
Most third-party services allow code to be embedded into their pages. The following process takes advantage of this.
Add the following code to the page (ideally in the page header, if possible):
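A minimal sketch of such a snippet, assuming a simple image-pixel request (www.example.com and pixel-path are placeholders, explained below):

```html
<script>
  // Fire a background request to the parent site. The path is deliberately
  // non-existent; its only purpose is to trigger Reblaze's active challenge.
  new Image().src = "https://www.example.com/pixel-path";
</script>
```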
Explanation:
“www.example.com” is your site.
“pixel-path” is a non-existent path. In other words, the overall address won’t actually resolve to a page or resource within your site.
The purpose of this request is not to return a resource or page; it is merely to trigger a call into the parent site. The call will trigger an active challenge.
If the visitor is a bot, the challenge will fail. Subsequent access attempts to the parent site (e.g., form submission attempts by a bot) will be blocked.
If the visitor is a human, the web browser will receive Reblaze’s authenticating cookies. Subsequent actions by the visitor will include the cookies, and will be accepted by Reblaze.
Note that Reblaze needs to be configured so that active challenges occur on the pixel-path URL given above.
Sometimes a customer wishes to "attack" an application or server as part of a loadtest. Under normal circumstances, Reblaze would prevent this, because it would enforce rate limits and block the excessive requests from reaching the upstream network.
The test can be accomplished by creating an ACL Policy which allows the source IP (or range of IPs), then naming that ACL Policy with a suffix of "OC." (For example, the ACL Policy might be named "Loadtest OC.")
As discussed in the Special ACLs section, the "OC" suffix means that the IP will not be subjected to normal rate limit testing. This allows that IP to generate a large amount of traffic, which will be passed through upstream without being filtered by Reblaze.
Cache modes are administered in Settings -> Web Proxy -> Application Profiles -> Cache Modes.
Web clients can cache resources from a server. Afterwards, a client can access its local cache, which reduces the number of requests sent to the server.
Servers can instruct clients to implement caching in certain ways. Servers can also set separate caching policies for any intermediary proxies in-between the server and the client.
Reblaze is a proxy between the clients and the origin (the upstream server). When the origin responds to clients, the outgoing responses pass through Reblaze. You can instruct Reblaze to preserve or alter the caching instructions in those responses.
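As an illustration of the underlying mechanism (this is not Reblaze code), here is how a client or intermediary proxy typically decides whether a cached response is still fresh, based on the Cache-Control header:

```javascript
// Minimal sketch: freshness decision from a Cache-Control header value.
function isFresh(cacheControl, ageSeconds) {
  // no-store / no-cache: the cached copy may not be served as-is.
  if (/no-store|no-cache/.test(cacheControl)) return false;
  const m = cacheControl.match(/max-age=(\d+)/);
  if (!m) return false;             // no max-age: require revalidation
  return ageSeconds < Number(m[1]); // fresh while younger than max-age
}
```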
On the Application Profiles page, the Cache Operation Mode is where you define Reblaze's caching behavior. There are several aspects to this:
Whether Reblaze includes caching instructions in the response to the client.
If so, whether they are the instructions from the origin server, or if Reblaze should override them and send different instructions instead.
Whether Reblaze itself caches the response content.
Here are the Cache Operation Modes and their effects.
Blocking requests that do not conform to content policies
There are several ways to filter requests based on their content.
Custom Signatures are a powerful method for specifying content restrictions for traffic. Once defined, they are included within profiles that are assigned to the various locations of your site/application.
A more direct method is to define content filters directly for a location. This too provides powerful filtering capabilities. Here's a comparison between location-based filtering and Custom Signatures.
Both can deny requests based on their content.
Location-based filtering makes it easier to require certain content in incoming requests.
Location-based filtering is simpler when setting up different filters for different locations.
Custom Signatures are modular, and once defined, they can be re-used in multiple places throughout the interface. A location-based filter definition cannot do this. Instead, you have to manually define the filtering conditions for each location.
Dynamic Rules can be used to rate-limit a requestor, based on the content that is requested.
Argument filtering examines the characters found in arguments. Depending on its mode, it can block requests when unexpected characters are found, or pass them on to the WAF for further inspection. It can also act as an inverse content filter: requests whose arguments contain only whitelisted characters can bypass WAF filtering.
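A conceptual sketch of the inverse-filter behavior (not Reblaze's implementation; the whitelist shown is an assumed example):

```javascript
// Assumed example whitelist: letters, digits, and a few common characters.
const WHITELIST = /^[A-Za-z0-9 _.@-]*$/;

// A request may bypass deeper WAF inspection only if every argument
// contains nothing but whitelisted characters.
function canBypassWaf(args) {
  return Object.values(args).every((v) => WHITELIST.test(v));
}
```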
| Cache Operation Mode | Caching instructions sent to client? | Same instructions as the origin's? | Does Reblaze cache response content? | Comments |
|---|---|---|---|---|
| Honor Origin | Yes | Yes | Yes, if the origin says to do so | Reblaze will comply with the origin, and pass along its caching instructions to the client. |
| Active Pipe | Yes | No | Complies with TTL settings | Reblaze will generate caching instructions for the client in accordance with the Client TTL and CDN TTL settings in the Application Profiles section, and will also comply with them itself. |
| Passive Pipe | Yes | No | No | Reblaze will generate caching instructions for the client in accordance with the Client TTL and CDN TTL settings in the Application Profiles section, but will not store anything itself. |
| Neutral Pipe | Yes | Yes | No | Reblaze will pass the origin's caching instructions to the client, without caching anything itself. |
| Reset Headers | No | n/a | No | Reblaze will remove all cache headers sent by the origin, and will send the response to the client without any cache directive. |
| No Cache | Yes | No | No | Reblaze will send no-cache instructions to the client: Cache-Control "max-age=0, no-cache, no-store" and Pragma "no-cache". |
| Private | Yes | No | Complies with TTL settings | Reblaze will generate caching instructions for the client in accordance with the Client TTL and CDN TTL settings, and will also comply with them itself. In addition, the client's instructions will include Cache-Control "private". |
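The header values above can also be inspected programmatically. A small sketch that splits a Cache-Control value into its individual directives:

```javascript
// Split a Cache-Control header value into a directive map, e.g. to check
// what a mode such as "No Cache" sends to the client.
function parseCacheControl(value) {
  return Object.fromEntries(
    value.split(",").map((d) => {
      const [k, v] = d.trim().split("=");
      // Valueless directives (no-cache, no-store, private) map to true.
      return [k.toLowerCase(), v === undefined ? true : v];
    }));
}
```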
The Quarantine section shows a list of traffic sources (i.e., sources of incoming requests) that are currently banned, blacklisted, and whitelisted.
A traffic source is banned automatically when it violates a Dynamic Rule. You cannot manually ban a requestor.
However, you can accomplish the same effect by blacklisting the requestor. Instructions are here.
You can manually remove a requestor from the Banlist or Blacklist. Instructions are here.
Whitelisting a traffic source will make it exempt from Dynamic Rules. Instructions are here.
A task-based FAQ
This section answers questions that often arise about using Reblaze.
Restricting consumption of resources and rate of requests
Different types of rate limits are defined in different parts of the Reblaze interface.
Global: The settings apply to your entire planet.
By location: Rate limits for specific locations/URLs can be created by defining the locations, then selecting "More" at the end of each location's entry in the list. See the full explanation here.
By traffic source: Requestors who are submitting excessive requests can be banned for configured lengths of time. This can be done via Dynamic Rules.
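As a generic illustration of per-source rate limiting (not Reblaze internals), a sliding-window limiter that allows at most a fixed number of requests per source within a time window might look like this:

```javascript
// Allow at most `limit` requests per source within a sliding window.
function makeRateLimiter(limit, windowMs) {
  const hits = new Map(); // source -> timestamps of recent requests
  return function allow(source, now) {
    const recent = (hits.get(source) || []).filter((t) => now - t < windowMs);
    if (recent.length >= limit) {
      hits.set(source, recent);
      return false; // over the limit: block this request
    }
    recent.push(now);
    hits.set(source, recent);
    return true;
  };
}
```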
Creating exemptions from rate limits is done differently, depending on the scope of the rate limits being addressed.
Global: Create an .
By location: Create an ACL Policy with the name "Rate Limit Whitelist". This can exempt any combination of IP, Country, and ASN. The Policy should then be included in a Profile, and the Profile should be assigned to the appropriate location(s) or portions of your site/application.
By traffic source: A traffic source can be exempted from Dynamic Rule filtering by whitelisting it, i.e., by adding the traffic source to the Whitelist in the Quarantine section.
Reblaze integrates with a wide range of SIEM (Security Information and Event Management) and SOC (Security Operations Center) solutions. Nearly 80% of our enterprise clients stream Reblaze events to their SOC platforms, such as ArcSight, RSA, IBM, and Splunk.
Reblaze sends logs over TLS using the Syslog protocol.
The variety of available integrations makes it impossible to describe the process for each of them here. Our team will assist you with the integration, to ensure your platforms get the relevant information as soon as it becomes available.
To make the connection, Reblaze requires the following:
Destination IP/FQDN
Destination port
Destination's public SSL certificate in PEM format. (To prevent a MITM vulnerability, Reblaze performs SSL pinning.)
Please have this information available when you contact us to begin the integration.
Raw logs are sent in the format described here.
Sometimes ACL Policies can make it difficult to find and block a specific attacker.
For example, let's say an ACL Policy (named "Allow US Traffic") for an ecommerce store allows all traffic originating from the United States. But then an attacker, using an IP within the US, begins to scrape prices and other data from the store.
If an ACL Policy were added to Deny that IP address, it wouldn't work. The hierarchy of ACL Policy enforcement means that the "Allow US Traffic" Policy will take precedence, and the 'Deny' Policy for that IP will never be invoked.
One way to quickly solve this problem is to add an ACL Policy with a name ending in a suffix of "XDeny" (for example, "Block US Scraper XDeny"). As was discussed in the "Special ACLs" section, that suffix moves the ACL Policy to the top of the hierarchy.
Therefore, that ACL Policy will be invoked and enforced for that IP address, and the attacking IP will be blocked, before the "Allow US Traffic" Policy has a chance to grant it access.
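Conceptually (an assumed simplification of the actual evaluation logic, for illustration only), the suffix works as if the Policy list were reordered so that "XDeny" Policies are evaluated first:

```javascript
// Move Policies whose names end in "XDeny" to the front of the
// evaluation order, leaving the rest in place.
function orderPolicies(policies) {
  return [...policies].sort((a, b) =>
    Number(b.name.endsWith("XDeny")) - Number(a.name.endsWith("XDeny")));
}
```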
Long-term use of this capability is not optimal; too-frequent use tends to create a collection of ACL Policies that is messy, confusing, and difficult to manage. The better approach is to use it as a temporary tool for solving a specific problem, while a robust set of ACL Policies is constructed and tested that will solve the problem correctly.