Background jobs and some infrastructure info

In the past we had an app on Heroku and we were using Sidekiq and Redis. A pretty common setup; it was a small application and the free tier met our needs at the start. Then the application started to gain traction, and the background jobs started to increase.

Free tier and its limits

The free Redis tier on Heroku wasn't enough anymore: it gives you 25MB and 20 connections. The app accesses it over a public endpoint instead of a private network. Behind the curtains this is probably Redis running on an EC2 instance and exposed publicly, but for sure you don't get a VPC. The funny thing is that a simple VPC on Heroku costs $1000 per month!

So Heroku Redis gets "pricey" very fast, and it doesn't offer much horsepower. At the time of this writing, $15 gets you 50MB and 40 connections, and the next plans scale pretty much linearly. In our setup the problem was concurrency: the 40-connection limit was too small. Our database could afford more connections, but Redis had a hard cap. In comparison, even a small RDS instance can handle up to 196 connections. This felt a bit unfair: if you get a $15 VPS and set up Redis yourself, you have no connection limitations, let alone limits on other resources like size. But of course you then have to maintain and upgrade everything yourself; that's the real cost of it.
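To see how fast a 40-connection cap runs out, here is a back-of-the-envelope sketch. The "concurrency + 5" overhead per Sidekiq server process matches how older Sidekiq versions documented their Redis pool sizing; treat the exact numbers (and the per-web-dyno pool of 5) as assumptions for illustration, not guarantees for your version.

```ruby
# Rough estimate of Redis connections consumed by a Sidekiq setup.
# Assumption: each Sidekiq server process needs roughly
# concurrency + 5 connections (older Sidekiq pool sizing).

def sidekiq_redis_connections(processes:, concurrency:)
  processes * (concurrency + 5)
end

# Web dynos also talk to Redis to enqueue jobs; assume a small
# client pool per dyno (5 here, purely illustrative).
def total_connections(sidekiq:, web_dynos:, web_pool: 5)
  sidekiq + web_dynos * web_pool
end

workers = sidekiq_redis_connections(processes: 1, concurrency: 25) # => 30
total   = total_connections(sidekiq: workers, web_dynos: 2)        # => 40

puts "Sidekiq alone: #{workers} connections"
puts "With 2 web dynos: #{total} of the 40-connection cap"
```

A single worker with concurrency 25 plus two web dynos already saturates the plan, which is exactly the wall we hit.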



AWS, which is our go-to solution, offers two alternatives: one is MemoryDB and the other is ElastiCache. ElastiCache is the cheap one and it's better suited for use cases like Sidekiq queues. The only problem is that AWS doesn't offer a public endpoint as a service. You can't just create it, connect, and forget; you have to somehow expose it publicly yourself. This makes sense, since exposing your Redis publicly brings many disadvantages. Still, if you have a small application, I think exposing the Redis endpoint is an ok-ish solution.
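If you do go down the publicly-exposed route with a Redis you manage yourself (ElastiCache itself is configured through parameter groups, not a config file), the bare minimum is to require authentication and refuse plaintext connections. A hedged sketch of the relevant redis.conf directives; the password and certificate paths are placeholders:

```
# redis.conf — minimum hardening before exposing Redis publicly.
# The password and the TLS file paths below are placeholders.

# Refuse unauthenticated clients.
requirepass use-a-long-random-password-here

# Serve TLS only; disable the plaintext port entirely.
port 0
tls-port 6379
tls-cert-file /etc/redis/tls/redis.crt
tls-key-file /etc/redis/tls/redis.key
tls-ca-cert-file /etc/redis/tls/ca.crt
```

Even with this, a publicly reachable Redis is a bigger attack surface than one inside a private network, which is the trade-off discussed above.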

Of course the best option would be to transfer our whole infrastructure to a cloud provider like AWS. Then you get a VPC, which is like a super fast local network, and everything is great. But we wanted something more plug 'n' play. It was a startup; we didn't want to spend time transferring all the add-ons and other configuration from Heroku right away. We hoped for bigger business success first; then we'd hire a devops engineer and let them decide what's best.

So, here comes the other alternative, the one we used. It offers exactly the same thing as Heroku Redis, and you can expose it publicly: you replace the configuration, connect, and forget. And the best thing is that it's much less expensive. For $10 you get 256 connections and 256MB. Still not the best, but you can live a bit longer without paying a fortune, and wait a bit more for the real success to come before moving to something more complicated.
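The "replace the configuration" step is genuinely small in a Sidekiq app. A minimal sketch of the initializer, assuming the new provider hands you a single connection URL in the `REDIS_URL` environment variable (Heroku's convention; your provider's variable name may differ):

```ruby
# config/initializers/sidekiq.rb — point both the Sidekiq server
# (workers) and client (enqueuing from web dynos) at the new Redis.
require "sidekiq"

# Assumption: the provider supplies REDIS_URL; fall back to a local
# Redis for development.
redis_options = { url: ENV.fetch("REDIS_URL", "redis://localhost:6379/0") }

Sidekiq.configure_server do |config|
  config.redis = redis_options
end

Sidekiq.configure_client do |config|
  config.redis = redis_options
end
```

Swap the environment variable on the dyno, restart, and the queues drain against the new Redis; no application code changes.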


What are you using in a small application that got some more exposure and success? Do you immediately transfer it to a cloud provider, or are you lazy and do it piece by piece like we did?