Your main app shouldn't have to handle everything. Get to know sidecar agents, the quiet workhorses your backend probably needs.


The current app doesn't have the capability to compile the config file. And it shouldn't have to!






Sidecar is self-explanatory:

It's another program that runs on the same machine as your app. It's much smaller, serves supporting features, and has a minimal CPU and memory footprint.
Illustration source: The Sidecar Pattern: A Pragmatic Approach for Microservices
Bloated Code
You stare at the ceiling ... now your code repository is bloated. You said it was just a microservice, but deep down you know it's a monolith maintained by three people. You convinced yourself the day would come when all the tech-debt items would get cleaned up. But every time you create a new feature branch, there are more interfaces, new repositories, controllers, duct-tape code ... On day one, you hated "If it works, don't touch it". Then you realized the best you can do is accept your fate, and it becomes your religion.
You're not alone.
If you have to deal with that, keep your chin up!
Ship With Confidence
Got something fresh in mind? Is it something lightweight, kind of a supporting feature, or experimental? The sidecar pattern is your friend.
It gives you confidence: you don't have to worry if things break after you introduce something to the sidecar agent. Your main app is alright!
Currently, my team needs to trigger the full deployment pipeline in order to get updated config values from AWS SSM Parameter Store. That's not ideal, since we have to wait ~10 min each time we update a config. You know, we all make mistakes. We might modify the same config many times until we get the right one in place.
Our instinct tells us to retrieve the config from a DB, which is not wrong, but keys and secrets shouldn't be stored in a DB. We have to store them in secure storage like HashiCorp Vault or AWS SSM Parameter Store.
Though storing config in a DB offers flexibility, it's not suitable for secrets.
Store keys and secrets in secure storage, and let's have a config hot-reloader agent! Write it as a sidecar and let your main app focus only on business-related stuff.
What may the sidecar do?
Our current implementation requires a config file named `config.yml` to be compiled before the app starts up.
```sh
#!/bin/sh
confd -onetime -backend ssm -prefix /project-name/$env_id
```

Directory structure:

```
├── conf.d
│   └── config.yml.toml
└── templates
    └── config.yml.tmpl
```

We may introduce the sidecar agent at this point!
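For reference, the confd template resource behind that structure might look something like this. The destination path and key names here are illustrative guesses, not our actual values:

```toml
# conf.d/config.yml.toml (illustrative)
[template]
src  = "config.yml.tmpl"
dest = "/app/config/config.yml"
keys = [
  "/database/url", # resolved under the -prefix passed to confd
  "/api/key",
]
```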
Design process
I don't know why I came up with that design. It's not ideal, since it requires the app to act as both a gRPC client and a gRPC server.
Now the app acts only as a gRPC client, one role fewer! But I don't like it. The app still takes part as an 'active' party during the config reload process. Let's change that.
Now, I like it. Let's keep it for now; next we can talk about the interface or communication protocol we can use to invoke the config refresh process.
But, wait!
Wouldn't that be better? Our app is most likely a REST server, right? ~ Yeah, maybe.
Detailed Steps
Let's zoom in.
Then I remember we use containerization in our deployment. That forces us to think: should we put the sidecar within the same container as the app, or should it be separated?
Initially, I was thinking of spawning the sidecar within the same container as the app. But that's odd, because in our current setup we already have a telemetry agent deployed in a separate container.
But, again. Why? 🤔
- If one of the programs crashes due to a memory leak, it doesn't bring the other down with it.
- Granular resource tuning for each program.
- Least privilege: each program has clean boundaries from the other.
- Independent updates for each program.
- ...
The sidecar pattern deploys the agent as a separate container because that enforces clean, runtime-level separation of concerns between business logic and operational logic.
Then, if our hot reloader is deployed as a separate container, how can it recompile the `config.yml` of the main app?
Answer: we can mount both into the same volume.
```yaml
volumes:
  - name: shared-config
    emptyDir: {} # or hostPath, or PVC
containers:
  - name: app
    volumeMounts:
      - name: shared-config
        mountPath: /app/config
  - name: sidecar
    volumeMounts:
      - name: shared-config
        mountPath: /sidecar/config
```

```
/app/config/config.yml
/sidecar/config/config.yml
# both are actually the same file!
```

So now: the sidecar can do its own process independently and ask the app to re-fetch `config.yml` once it's ready.
Failure
Let's now think about failure. I'd like to know whether the config was successfully refreshed on the application side, hence we'll make the read of the newly compiled config file a synchronous process (between the sidecar and the app).
A failure at any step immediately propagates the error to the caller, making it possible for the engineer to take swift compensating action.

As long as the `config.yml` is successfully compiled, any remaining failure must be app-level, and it should be handled carefully.
Wrapping Up
Any requirement is a complexity. Sometimes, eliminating a complexity basically means eliminating the requirement itself. So what can you do to make life easier without sacrificing the business? You transfer the complexity somewhere else, where it's more manageable. It's a distribution, a zero-sum game.
That's why patterns matter when you're anticipating a big, complex requirement.
Anyway~
It's a raw idea we've just discussed; I haven't written a piece of code for it yet. But it's aimed at a real pain in our daily engineering work.
Will keep you updated if we're deciding to write it. ✌️