Subject Mapping and Traffic Shaping

    Subject mapping is a very powerful feature of the NATS server, useful for canary deployments, A/B testing, chaos testing, and migrating to a new subject namespace.

    The mappings stanza can occur at the top level of the server configuration to apply to the global account, or it can be scoped within a specific account.
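    As a sketch, a simple top-level mapping in the server configuration file could look like this (foo and bar are the subjects discussed below):

      mappings = {
        # messages published to foo are delivered to subscribers of bar
        foo: bar
      }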

    The example of foo:bar is straightforward. All messages the server receives on subject foo are remapped and can be received by clients subscribed to bar.

    Wildcard tokens may be referenced via $<position>. For example, the first wildcard token is $1, the second is $2, etc. Referencing these tokens can allow for reordering.

    With such a mapping, the tokens of the incoming subject can be rearranged in the destination subject, as sketched below.
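    For illustration, a mapping that swaps the two wildcard tokens might look like this (the subjects bar.*.* and baz are made up for this sketch):

      mappings = {
        # swap the two wildcard tokens
        bar.*.*: baz.$2.$1
      }

    With this sketch, a message published to bar.a.b would be received by clients subscribed to baz.b.a.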

      Traffic can be split by percentage from one subject to multiple subjects. Here’s an example for canary deployments, starting with version 1 of your service.

      Applications would make requests of a service at myservice.requests. The responders doing the work of the service would subscribe to myservice.requests.v1. Your configuration would look something like the sketch below.
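      A minimal sketch of that initial mapping, placed inside the mappings stanza shown earlier, with 100% of the weight on the v1 subject:

        myservice.requests: [
          { destination: myservice.requests.v1, weight: 100% }
        ]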

      All requests to myservice.requests will go to version 1 of your service.

      When version 2 comes along, you’ll want to test it with a canary deployment. Version 2 would subscribe to myservice.requests.v2. Launch instances of version 2 of your service (don’t forget about queue subscribers and load balancing).

      Update the configuration file to redirect some portion of the requests made to myservice.requests to version 2 of your service. In this case we’ll use 2%.

        myservice.requests: [
          { destination: myservice.requests.v1, weight: 98% },
          { destination: myservice.requests.v2, weight: 2% }
        ]

      Once you’ve determined version 2 is stable, switch 100% of the traffic over to it and reload the server with the new configuration.
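      A sketch of that final cut-over, with all of the weight on the v2 subject (the server picks up the change on a configuration reload, for example by sending it a SIGHUP):

        myservice.requests: [
          { destination: myservice.requests.v2, weight: 100% }
        ]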

      Now shut down the version 1 instances of your service.

      Traffic shaping is also useful in testing. You might have a service running in QA that simulates failure scenarios; it could receive 20% of the traffic to test the service requestor.

        myservice.requests.*: [
          # without this entry the remaining 80% would be unmapped and dropped
          { destination: myservice.requests.$1, weight: 80% },
          # 20% of the traffic is diverted to the failure-simulating QA service
          { destination: myservice.requests.fail.$1, weight: 20% }
        ]

      Alternatively, introduce loss into your system for chaos testing by mapping a percentage of traffic to the same subject. In this drastic example, 50% of the traffic published to foo.loss.a would be artificially dropped by the server.
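      A sketch of such a loss-inducing mapping: the subject is mapped back onto itself with only 50% of the weight, and the unmapped remainder is dropped by the server.

        foo.loss.a: [
          { destination: foo.loss.a, weight: 50% }
        ]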

      You can both split traffic and introduce loss for testing. Here, 90% of requests would go to your service, 8% would go to a service simulating failure conditions, and the unaccounted-for 2% would simulate message loss.

        myservice.requests: [
          { destination: myservice.requests.v3, weight: 90% },
          { destination: myservice.requests.v3.fail, weight: 8% }
        ]