#infrastructure
4 posts

4 common misconceptions about AWS auto-scaling

  • Do you think auto-scaling is easy? No, it is not. Maintaining the templates & scripts required for the auto-scaling process to work well takes a significant time investment.
  • It is a myth that elastic scaling is more common than fixed-size auto-scaling. Most useful aspects of auto-scaling focus on high availability and redundancy rather than elastic load scaling (see the sketch after this list).
  • A common misconception is that load-based auto-scaling is appropriate in every environment. Some cloud deployments will be better off without auto-scaling or on a limited basis.
  • There is a delicate balance between maintaining perfect base images and running lengthy configuration during an auto-scale event. Where you land depends on how fast an instance needs to spin up, how often auto-scaling events happen, the average life of an instance, and so on.
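
To make the fixed-size case concrete, here is a minimal sketch (using boto3; the group name, launch template, and subnet IDs are placeholders, not taken from the post) of an auto-scaling group whose min, max, and desired capacity are all equal, so it never scales with load and only replaces unhealthy instances across availability zones:

```python
# Sketch: a fixed-size auto-scaling group used purely for high availability
# and redundancy, not for elastic load scaling.
# The group name, launch template, and subnet IDs are placeholders.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-fixed-size",          # hypothetical name
    MinSize=3,
    MaxSize=3,                                      # min == max == desired: never scales on load
    DesiredCapacity=3,
    LaunchTemplate={
        "LaunchTemplateName": "web-launch-template",  # hypothetical launch template
        "Version": "$Latest",
    },
    VPCZoneIdentifier="subnet-aaa,subnet-bbb,subnet-ccc",  # spread across 3 AZs
    HealthCheckType="EC2",
    HealthCheckGracePeriod=300,
)
```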

Full post here, 6 mins read

When AWS autoscale doesn't

  • Scaling speed is limited. If the ratio of your actual metric value to the target metric value is low, the maximum magnitude of a scale-out event is significantly limited.
  • Short cooldown periods can cause over-scaling or under-scaling because a scaling event may trigger before a previous scaling event has concluded.
  • Target tracking autoscaling works best when at least one ECS service or CloudWatch metric is directly affected by the running task count, and the metrics are bounded and relatively stable and predictable (see the sketch after this list).
  • The best way to find the right autoscaling strategy is to test it in your specific environment and against your specific load patterns.
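
To illustrate target tracking and cooldowns together, here is a hedged sketch using boto3's Application Auto Scaling API; the cluster name, service name, target value, and cooldown lengths are illustrative, not from the post:

```python
# Sketch: target tracking autoscaling for an ECS service, with explicit
# cooldowns so new scaling events don't fire before the previous one settles.
# Cluster/service names and numbers are illustrative only.
import boto3

aas = boto3.client("application-autoscaling")

# Register the ECS service's desired task count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/demo-cluster/demo-service",   # hypothetical cluster/service
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Track average CPU utilisation at 60%; generous cooldowns reduce the
# over- and under-scaling caused by back-to-back scaling events.
aas.put_scaling_policy(
    PolicyName="cpu-target-tracking",                 # hypothetical policy name
    ServiceNamespace="ecs",
    ResourceId="service/demo-cluster/demo-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization",
        },
        "ScaleOutCooldown": 120,   # seconds to wait after scaling out
        "ScaleInCooldown": 300,    # longer cooldown before scaling back in
    },
)
```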

Full post here, 8 mins read

Autoscaling AWS Step Functions activities

  • Transactional flows are an ideal use case for auto-scaling because of unused compute capacity during non-peak hours.
  • When you need to detect scaling-worthy events, AWS components like Step Functions metrics and CloudWatch alarms come in handy (a sketch follows this list).
  • Enforce a scale-down cool-off period so that two consecutive scale-down actions cannot fire within a short window of each other.
  • Guard your system against any malicious, delayed, or duplicated scaling notifications by validating incoming scaling signals.
  • Review historical statistics when tuning scale-down alarms so that they are less prone to spurious triggers and never fire during peak hours.
  • For a safe rollout, move in small increments until you gradually reach the ideal minimum instance count.
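
As a sketch of the "scaling-worthy event" detection described above (none of this code is from the post; the activity ARN, SNS topic, threshold, and alarm name are placeholders), a CloudWatch alarm on a Step Functions activity metric might look like this with boto3:

```python
# Sketch: a CloudWatch alarm on a Step Functions activity metric as the
# scaling-worthy-event detector. ActivityScheduleTime measures, in
# milliseconds, how long activity tasks wait for a worker, so a growing
# value suggests too few workers. ARNs and thresholds are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="activity-backlog-scale-out",            # hypothetical alarm name
    Namespace="AWS/States",
    MetricName="ActivityScheduleTime",
    Dimensions=[{
        "Name": "ActivityArn",
        "Value": "arn:aws:states:us-east-1:123456789012:activity:demo",  # placeholder
    }],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,       # require 3 consecutive breaches to avoid noisy triggers
    Threshold=30000.0,         # tasks waiting > 30 seconds on average
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[
        "arn:aws:sns:us-east-1:123456789012:scale-out-topic",  # placeholder SNS topic
    ],
)
```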

Full post here, 5 mins read

Serverless: 15% slower & 8x more expensive

Context (read: use case) is always king when it comes to statements like this. Einar Egilsson of CardGames.io wanted to see if hosting their API with a serverless framework made sense and gave it a go. It turns out that for some workloads, serverless setups can get a lot more expensive than the old and traditional options.

He set up two instances to route requests for one of their games, one on the new serverless setup and another on their old Beanstalk setup, to compare performance. Einar ran 100 requests (one database call per request) against both setups and found the serverless setup to be 15% slower.

He was paying about $164.21 per month with the Beanstalk setup. The new serverless setup cost much more than he expected. Their API receives around 10 million requests a day, which translated to about $35 per day just for API Gateway. Add the Lambda cost of about $10 a day and he was at $45 per day, or $1350 per month - about 8 times the $164 per month of the original Beanstalk setup.
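
The back-of-envelope maths, reproduced below, uses only the numbers from the post; the $3.50-per-million API Gateway rate is implied by 10 million requests a day costing about $35:

```python
# Reproduction of the post's cost comparison. The per-million API Gateway
# rate is implied by the post's own numbers (10M requests/day -> ~$35/day);
# Lambda is taken as ~$10/day, also from the post.
requests_per_day = 10_000_000
api_gateway_per_million = 3.50                 # USD, implied unit price

api_gateway_per_day = requests_per_day / 1_000_000 * api_gateway_per_million  # ~$35
lambda_per_day = 10.0                          # USD, from the post
serverless_per_month = (api_gateway_per_day + lambda_per_day) * 30            # ~$1350

beanstalk_per_month = 164.21                   # USD, from the post
print(f"Serverless: ~${serverless_per_month:.0f}/month "
      f"({serverless_per_month / beanstalk_per_month:.1f}x Beanstalk)")
```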

Two takeaways from this:

  • Decide within the context. Stay away from blanket solutions. Use cases matter the most.
  • “Pay for what you use” always sounds like the cheaper option, but you must always do the maths for your use case before making any switch.

Full post here, 6 mins read