Signing HTTP Requests
Learn how to use Envoy & OPA to sign and validate HTTP Requests
Getting Started with Envoy & Open Policy Agent — 09 —
This is the 9th Envoy & Open Policy Agent Getting Started Guide. Each guide is intended to explore a single feature and walk through a simple implementation. Each guide builds on the concepts explored in the previous one, with the goal of assembling a very powerful authorization service by the end of the series.
The source code for this getting started example is located on Github. ——> Envoy & OPA GS # 9
Here is a list of the Getting Started Guides that are currently available.
Getting Started Guides
- Using Envoy as a Front Proxy
- Adding Observability Tools
- Plugging Open Policy Agent into Envoy
- Using the Open Policy Agent CLI
- JWS Token Validation with OPA
- JWS Token Validation with Envoy
- Putting It All Together with Composite Authorization
- Configuring Envoy Logs Taps and Traces
- Sign / Verify HTTP Requests
Introduction
In this example we will show how to use Envoy and Open Policy Agent to sign HTTP requests as they pass through a front proxy, and then validate those signatures on a second Envoy / Open Policy Agent instance that is dedicated to the endpoint we are protecting.
The diagram below shows the system and request flow that we will be building.
Signing and validating signatures on a request is very useful in large environments where a request may traverse a number of different application layers, proxies and other components that are owned by a variety of teams. Some of those components may even be controlled by 3rd parties.
A digital signature is a mathematical scheme for verifying the authenticity of digital messages or documents. A valid digital signature, where the prerequisites are satisfied, gives a recipient very strong reason to believe that the message was created by a known sender (authentication), and that the message was not altered in transit (integrity).[1]
Digital signatures are a standard element of most cryptographic protocol suites, and are commonly used for software distribution, financial transactions, contract management software, and in other cases where it is important to detect forgery or tampering.
Source: Wikipedia
Digital signatures can be a great tool to prevent internal fraud. In large corporations it is common to utilize centralized log aggregation tools. These tools are often open to all employees / workforce users that have valid credentials. This is a great practice for transparency, understanding dependencies, troubleshooting etc. However, sometimes:
- Novice engineers or engineers that are not used to working in large corporations may log sensitive data elements such as access tokens or other headers that can be used to forge a malicious request that moves money, purchases a product etc.
- 3rd Parties such as a cloud provider that operates an API gateway, a lambda product or any other product that acts as a proxy to your code will have access to the entire request / response stream. They have their own logs. A malicious engineer at the cloud provider can log and alter or forge any request / response flowing through their products whenever a TLS connection is terminated by their product.
Walking through the docker-compose file
Link to compose file
Envoy Instances
As shown in the highlighted rows (lines 6 & 13), each Envoy proxy is built from a Dockerfile located in its own directory. Additionally, since we have traces turned on, we mapped local volumes (lines 8 & 15) into the Envoy containers to capture and expose the recorded requests and responses. For more information on how to set this up, there is a getting started guide for it.
1version: "3.7"
2services:
3
4 front-envoy:
5 build:
6> context: ./front-proxy
7 volumes:
8> - ./tmp/front:/tmp/any
9 ...
10
11 service1:
12 build:
13> context: ./service1
14 volumes:
15> - ./tmp/service1:/tmp/any
16 ...
Open Policy Agent Instances
As shown in the highlighted rows (lines 3 & 15), just like the Envoy instances, each Open Policy Agent instance is built from a Dockerfile located in its own directory. The policy files (lines 11 & 23) are consistently named but have different content and purposes.
1 sign:
2 build:
3# context: ./sign
4 ...
5 command:
6 - "run"
7 - "--log-level=debug"
8 - "--server"
9 - "--set=plugins.envoy_ext_authz_grpc.addr=:9191"
10 - "--set=decision_logs.console=true"
11# - "/config/policy.rego"
12
13 verify:
14 build:
15# context: ./verify
16 ...
17 command:
18 - "run"
19 - "--log-level=debug"
20 - "--server"
21 - "--set=plugins.envoy_ext_authz_grpc.addr=:9191"
22 - "--set=decision_logs.console=true"
23# - "/config/policy.rego"
24
Walking through the signing policy
Link to signing policy
As a request flows through a complex set of systems, a lot of headers are injected and / or removed on various hops. These headers might be used for:
- Routing
- Injecting or updating tracing headers
- Injecting or removing other headers that are meaningful to the proxies
Our signature needs to survive these transformations. We need to declare what should not change and then include only those elements in the signature. To help us communicate these pieces of information to the recipient, we will create a signed JSON Web Token to hold our HTTP request signature.
Let’s walk through some highlights of the REGO Policy file.
- We need a private signing key to ensure that the signature has not been tampered with. This is declared on line 5 but would be retrieved from a key management system in a production deployment.
- The headers that we want to remain unchanged throughout the process are declared on line 8. These include some other JWS tokens. Those tokens prove the identity of the user, the application originating the request, and the subject / entity being acted on behalf of (if applicable). Additionally, for troubleshooting purposes, we have bound the session-id and request-id into the signature to ensure traceability outside of any open tracing solutions.
Calculating a hash of the headers
There are 3 steps required to calculate the hash of the headers:
- The first step, on line 11, is to create a new object that contains only the headers of interest from the original request. The object.filter() REGO built-in function does that for us. It takes 2 parameters: the original object that we need to pull from and the list of headers we want to include in the new object.
- We convert the object to a JSON string on line 12 with json.marshal. This REGO built-in function consistently orders the keys in the resulting output string. If this were not the case, then we would not be able to use REGO for this.
- Finally, we calculate the hash of the headers with the crypto.sha256() built-in function on line 13.
1package envoy.authz
2
3import input.attributes.request.http as http_request
4
5signingKey = { ... } # Private key for creating the signature
6
7# Headers that we would like included in the signature
8criticalHeaders = ["actor-token", "app-token", "subject-token", "session-id", "request-id"]
9
10# We calculate the header hash by ...
11filteredHeaders = object.filter( http_request.headers, criticalHeaders )
12headerString = json.marshal( filteredHeaders )
13headerHash = crypto.sha256( headerString )
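The same three steps can be sketched outside of REGO. The Python fragment below is an illustrative stand-in (not part of the example repo): json.dumps with sort_keys=True plays the role of REGO's consistently ordered json.marshal, although the exact serialized bytes (and therefore the hash values) will differ from OPA's output.

```python
import hashlib
import json

def header_hash(headers: dict, critical: list) -> str:
    """Filter to the critical headers, serialize them deterministically, then hash."""
    filtered = {k: v for k, v in headers.items() if k in critical}                # object.filter
    header_string = json.dumps(filtered, sort_keys=True, separators=(",", ":"))   # json.marshal
    return hashlib.sha256(header_string.encode()).hexdigest()                     # crypto.sha256

critical = ["actor-token", "app-token", "subject-token", "session-id", "request-id"]

# Headers added or removed by intermediate hops do not disturb the hash,
# and insertion order is irrelevant because the keys are sorted.
a = header_hash({"session-id": "abc", "request-id": "123"}, critical)
b = header_hash({"request-id": "123", "session-id": "abc", "x-hop-header": "varies"}, critical)
print(a == b)  # → True
```

Deterministic serialization is the linchpin: if two sides serialized the same filtered object differently, their hashes would never match.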
Calculating a Hash of the Body
- We need to ensure the body is initialized before calculating a hash for it. Line 15 declares a default value of an empty string.
- If http_request.body on line 17 is missing, then the default value (empty string) is assigned to the body variable.
- Then we can calculate the hash of the body using the crypto.sha256( body ) built-in function on line 21.
14// Ensure the request body is initialized and fetch the request's body if one is present
15default body = ""
16body = b {
17 b := http_request.body
18}
19
20// Then calculate the hash for the request body
21bodyHash = crypto.sha256( body )
Communicating our signature
Now that we have the hash of our headers and body, we need to gather all of the information that we want to put into our signature. We will lock all of this information together by binding it into a JWS token. We will use some standard claims and create some custom claims as well.
22requestDigest = {
23 "iss": "apigateway.example.com",
24 "aud": [ "protected-api.example.com"],
25 "host": http_request.host,
26 "method": http_request.method,
27 "path": http_request.path,
28 "created": time.now_ns(),
29 "headers": criticalHeaders,
30 "headerDigest": headerHash,
31 "bodyDigest": bodyHash
32}
33
34digestHeader = io.jwt.encode_sign({ "typ": "JWT", "alg": "RS256" }, requestDigest, signingKey )
- The system sending the request identifies itself as the issuer (line 23).
- We can also specify the recipient of the request (line 24).
- Since the JWS is signed, we can simply copy the host, method, & resource path (lines 25-27).
- A timestamp (line 28) informs the recipient of how long the token has been in transit. The recipient can determine how long to honor a request.
- The recipient needs to know which headers (line 29) were included in the header hash and of course it needs to know the hash (line 30).
- The recipient also needs to know what the hash of the body is (line 31).
- Finally, we create our JWS by passing our JWT claims object, our signing key, and some parameters specifying the signing algorithm into the io.jwt.encode_sign() built-in function.
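For intuition, the token construction can be sketched in plain Python. This is a stdlib-only sketch, not the policy's actual mechanism: it uses HS256 (HMAC) instead of the RS256 key pair, and the demo-secret key and placeholder digest values are made up for illustration.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWS uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def encode_sign(header: dict, claims: dict, key: bytes) -> str:
    """Rough counterpart of io.jwt.encode_sign(), using HS256 to stay stdlib-only."""
    h = b64url(json.dumps(header, sort_keys=True).encode())
    p = b64url(json.dumps(claims, sort_keys=True).encode())
    sig = b64url(hmac.new(key, f"{h}.{p}".encode(), hashlib.sha256).digest())
    return f"{h}.{p}.{sig}"

requestDigest = {
    "iss": "apigateway.example.com",
    "aud": ["protected-api.example.com"],
    "created": time.time_ns(),
    "headers": ["actor-token", "app-token", "subject-token", "session-id", "request-id"],
    "headerDigest": "placeholder-header-hash",  # hypothetical value
    "bodyDigest": "placeholder-body-hash",      # hypothetical value
}
digestHeader = encode_sign({"typ": "JWT", "alg": "HS256"}, requestDigest, b"demo-secret")
print(digestHeader.count("."))  # → 2  (three dot-separated JWS segments)
```

Because the signature covers the encoded header and payload together, any alteration to the claims invalidates the token.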
Handing off to Envoy to add the signature to the request
With the signature calculated, all that remains to be done is to attach it to the outbound request.
- As we learned previously, when using the external authorization feature, we can tell Envoy to insert or update headers by setting the header name and its value inside a headers object. In our case, we don’t want to interfere with any other authorization mechanisms that are in use in the environment. So, instead of putting our signature in the authorization header, we will create a Digest header (line 38).
- Our OPA policy wasn't evaluating any real authorization rules, so we always set the allowed variable to true (line 36).
- We could implement this as a service mesh sidecar to sign all outbound requests on behalf of an application.
- In a front proxy use case, we most likely would have run other rules before signing the request and forwarding it to its ultimate destination.
35allow = {
36 "allowed": true, # Outbound requests are always allowed. This policy simply signs the request
37 "headers": {
38 "Digest": digestHeader
39 }
40}
Tests for the signing policy
Link to signing policy tests
The signatures should work equally well whether each of the fields we are signing is present and populated, present but empty, or missing entirely. This ensures that if a field is optional the signature will still be created, and that if a fake value is inserted it can still be detected.
For each of these use cases we precalculated the hashes for a given input.
- The fullyPopulated variable (line 3) is used to simulate the input received from Envoy.
- The fullyPopulatedJws variable (line 4) holds the precalculated digests that represent our signature.
- We can directly refer to variables in our main REGO policy from their associated tests.
- On lines 11, 15 and 19 we test to see if those variables contain our expected results.
- This same pattern is repeated for the 'empty' and 'missing' use cases.
1package envoy.authz
2
3fullyPopulated = { ... }
4fullyPopulatedJws = {
5 "bodyDigest": "60009cec5b535270a0b8389cea67c894fae9549c17b2ceef8f824cde3a10b14e",
6 "headerDigest": "87329ba8383ff39b40746aa22e8d4ee58facc5ac470cac410efb6e549f7574fb",
7}
8
9# Fully populated request
10test_fully_populated_req_bodyHash_matches {
11 bodyHash == fullyPopulatedJws.bodyDigest with input as fullyPopulated
12}
13
14test_fully_populated_req_headerHash_matches {
15 headerHash == fullyPopulatedJws.headerDigest with input as fullyPopulated
16}
17
18test_fully_populated_req_allowed {
19 allow.allowed with input as fullyPopulated
20}
Validating the signature upon Receipt
Link to signature verification policy
Extracting the signature from the request
The first part of the policy:
- Extracts the digest from the incoming request header
- Validates the JWS Digest token
- Places the contents of the validated token in a variable for other rules
1package envoy.authz
2
3import input.attributes.request.http as http_request
4import input.attributes.request.http.headers["digest"] as digest
5
6jwks = `{...}`
7
8verified_digest = v {
9 [isValid, _, payload ] := io.jwt.decode_verify( digest,
10 {
11 "cert": jwks,
12 "aud": "protected-api.example.com" # <-- Required since the token contains an `aud` claim in the payload
13 })
14 v := {
15 "isValid": isValid,
16 "payload": payload
17 }
18}
- Line 4: Extracts the request's signature token from the Digest header
- Line 8: If validated, sets a variable to hold the decoded token payload
- Line 9: Decodes and validates the token using the io.jwt.decode_verify() built-in function
- Line 11: Provides the decode function the public keys it needs to validate the signature
- Line 12: We must provide an expected audience whenever the token contains an audience claim. The audience claim is an array; if any member of that array matches the supplied audience then that validation rule will pass.
- Line 14: Assigns the decoded token to named properties in an object that in turn is assigned to the verified_digest variable
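Continuing the stdlib-only Python sketch from the signing side (HS256 in place of the policy's RS256, with a made-up demo key), verification reverses the process: recompute the signature over the first two segments, compare it in constant time, and check the audience claim.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def decode_verify(token: str, key: bytes, expected_aud: str):
    """Rough counterpart of io.jwt.decode_verify() for an HS256 token."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected_sig = hmac.new(key, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256).digest()
    payload = json.loads(b64url_decode(payload_b64))
    is_valid = (
        hmac.compare_digest(expected_sig, b64url_decode(sig_b64))
        # like OPA, require the expected audience to appear in the aud claim
        and expected_aud in payload.get("aud", [])
    )
    return is_valid, payload

# Build a demo token the same way the signing sketch would
claims = {"aud": ["protected-api.example.com"], "bodyDigest": "abc"}
h = b64url(json.dumps({"typ": "JWT", "alg": "HS256"}).encode())
p = b64url(json.dumps(claims, sort_keys=True).encode())
token = f"{h}.{p}." + b64url(hmac.new(b"demo-secret", f"{h}.{p}".encode(), hashlib.sha256).digest())

print(decode_verify(token, b"demo-secret", "protected-api.example.com")[0])  # → True
print(decode_verify(token, b"wrong-key", "protected-api.example.com")[0])    # → False
```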
Comparing the request to the validated token
Now we can do the important part and compare the values in the request with what was preserved in the JWS token.
19headersMatch {
20 headerHash == verified_digest.payload.headerDigest
21}
22bodiesMatch {
23 bodyHash == verified_digest.payload.bodyDigest
24}
25hostsMatch {
26 http_request.host == verified_digest.payload.host
27}
28methodsMatch {
29 http_request.method == verified_digest.payload.method
30}
31pathsMatch {
32 http_request.path == verified_digest.payload.path
33}
- Line 20: The list of critical headers from the JWS token is used to filter the request's headers and calculate a hash using the same statements that were used in the signing process (not shown). If they match then headersMatch is set to true.
- Line 23: The request's body is extracted and used to calculate a hash using the same statements that were used in the signing process (not shown). If they match then bodiesMatch is set to true.
- Lines 26, 29 & 32: Since these values are captured directly in the token, no special processing is required. They are simply compared to see if they have been altered.
Checking Request Recency
We don’t want to leave an unbounded amount of time for a request to be processed. The next section of the policy checks to see if the request was initiated recently enough.
34requestDuration = time.now_ns() - verified_digest.payload.created
35
36withinRecencyWindow {
37 requestDuration < 34159642955430000
38}
- Line 34: Uses the built-in function time.now_ns() to calculate the time that has passed since the request was initiated.
- Line 37: Compares the calculated request duration to our business rule.
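The recency check is just a subtraction and a comparison. A minimal Python sketch of the same shape (the 5-minute window below is a hypothetical value; choose one that fits your business rule):

```python
import time

RECENCY_WINDOW_NS = 5 * 60 * 1_000_000_000  # hypothetical 5-minute window, in nanoseconds

def within_recency_window(created_ns: int, now_ns: int = None) -> bool:
    """True if the request was created less than RECENCY_WINDOW_NS ago."""
    now = time.time_ns() if now_ns is None else now_ns
    return (now - created_ns) < RECENCY_WINDOW_NS

now = time.time_ns()
print(within_recency_window(now - 1_000, now))                  # → True
print(within_recency_window(now - RECENCY_WINDOW_NS - 1, now))  # → False
```

Note that this only bounds how stale a request can be; replay protection within the window would require tracking identifiers such as the request-id.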
Generating Rule Failure Messages
On this side of the connection, our message rules need 2 conditions to properly identify which rules failed.
39messages[ msg ]{
40 verified_digest.isValid
41 not headersMatch
42 msg := {
43 "id" : "7",
44 "priority" : "5",
45 "message" : "Critical request headers do not match signature"
46 }
47}
- Line 40: If the digest token is simply missing or corrupt, we don’t want to return messages for every single rule that failed. If the signature token is missing entirely, the issue isn’t really that the headers don’t match. The real issue is that the signature token is missing. So, this guard rule makes sure that we at least had a valid signature token before we check to see if the headers didn’t match.
- Line 41: If the headers don't match then we return a message indicating that.
- Numerous other similar rules are in the full policy file (not shown here) testing for similar conditions.
Calculating the final decision on the validity of the request
Lines 49 through 55 below calculate the final decision on the validity of the request. When we name our rules intuitively, the decision logic is pretty intuitive as well. In this case the methods, paths, hosts, headers and bodies must all match, and the request must fall within the recency window.
48default decision = false
49decision {
50 methodsMatch
51 pathsMatch
52 hostsMatch
53 headersMatch
54 bodiesMatch
55 withinRecencyWindow
56}
57
58default headerValue = "false"
59headerValue = h {
60 decision
61 h := "true"
62}
63
64allow = {
65 "allowed": decision,
66 "headers": {
67 "Valid-Request": headerValue
68 },
69 "body" : json.marshal({ "Authorization-Failures": messages })
70}
- Line 69: OPA's built-in function json.marshal() converts all of the rule failures into a string for sending back to the caller.
The other statements should be familiar from our previous discussion of Envoy’s external authorization contract. So, we won’t break that down again.
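The final decision is a simple conjunction over the named rules. A minimal Python sketch of the same shape (rule names mirror the policy; the response structure follows the external authorization contract shown above):

```python
import json

def allow(checks: dict, messages: list) -> dict:
    """All named rules must hold for the request to be considered valid."""
    decision = all(checks.values())
    return {
        "allowed": decision,
        "headers": {"Valid-Request": "true" if decision else "false"},
        "body": json.dumps({"Authorization-Failures": messages}),
    }

checks = {
    "methodsMatch": True, "pathsMatch": True, "hostsMatch": True,
    "headersMatch": True, "bodiesMatch": True, "withinRecencyWindow": True,
}
print(allow(checks, [])["allowed"])                                      # → True
print(allow({**checks, "headersMatch": False}, ["headers"])["allowed"])  # → False
```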
The signature verification policy tests are more complicated than the tests for the signing process due to increased complexity of the policy.
Running the Example
Simply run the ./demonstrate_sign_verify.sh script to see request signing and verification in action. The example scripts leverage the same logs, taps and traces configuration as Getting Started Guide # 8.
Link to script to run the request signing & verification example
Congratulations
We have completed our example demonstrating how to sign and validate HTTP requests. This capability is very powerful and effective at protecting the integrity of transactions. The best part about this approach is that it can be added to any application in your portfolio without requiring code changes to every system.