Is your team planning to implement HTTP/2 for services hosted on AWS behind an ALB? Then you might have to consider the tests below as a starting point.
By default, ALB sends requests to targets using HTTP/1.1. Refer to Request Version and Protocol Version at https://amzn.to/3uPaxl1 for sending requests to targets using HTTP/2 or gRPC.
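If you manage the target group programmatically, the protocol version can be set when the target group is created. Below is a minimal sketch using boto3; the target group name, port, and VPC id are placeholders, not values from this setup.

```python
# Minimal sketch: create an ALB target group whose targets are reached
# over HTTP/2. Name, port, and VPC id below are placeholders.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

response = elbv2.create_target_group(
    Name="my-http2-targets",           # hypothetical target group name
    Protocol="HTTPS",                  # ALB-to-target protocol
    ProtocolVersion="HTTP2",           # HTTP1 (default) | HTTP2 | GRPC
    Port=443,
    VpcId="vpc-0123456789abcdef0",     # placeholder VPC id
    TargetType="ip",
)
print(response["TargetGroups"][0]["TargetGroupArn"])
```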
Here are a few tests to consider on priority:
- Have all my client interfaces migrated to HTTP/2?
- If not, an ALB serving HTTP/2 will still fall back to HTTP/1.1 for clients that only speak HTTP/1.1 (a quick way to verify the negotiated protocol is shown in the first sketch after this list)
- How long does it take to respond when I'm on HTTP/2?
- What is the traffic pattern, i.e. how are the frames (frequency & size) delivered to the endpoint on HTTP/2?
- What happens to the client if there is a delay in response?
- If serving users on low-bandwidth or intermittent networks, what should the product experience be for them?
- What if requests are queued, and that queuing is by design in the system?
- What's happening at the endpoint?
- What happens when the client sends a POST or PUT as multipart versus non-multipart?
- When a bunch of requests is fired at once, what is the size of each batch and of each frame within it?
- Does the client still adhere to the HTTP/1.1 style of handling requests and responses, or has it been upgraded to handle them the HTTP/2 way?
- How is a timeout handled on the server, and how is it reported back to the client? (The second sketch after this list exercises this with a batch of concurrent requests and a short client-side timeout.)
- Do the libraries I use on the client and server side support HTTP/2?
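To answer the protocol-negotiation and response-time questions above, a quick client-side check helps. This is a rough sketch assuming the httpx library (installed with its HTTP/2 extra) and a hypothetical ALB endpoint URL:

```python
# Rough sketch: confirm which protocol was negotiated and time one request.
# Requires: pip install "httpx[http2]" -- the endpoint URL is a placeholder.
import httpx

ALB_URL = "https://my-alb.example.com/health"  # hypothetical endpoint

with httpx.Client(http2=True) as client:
    response = client.get(ALB_URL)
    # "HTTP/2" only if both sides negotiated it; otherwise the connection
    # silently falls back to "HTTP/1.1".
    print("negotiated:", response.http_version)
    print("elapsed:", response.elapsed.total_seconds(), "s")
```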
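For the batch, multiplexing, and timeout questions, one way to probe is to fire a batch of requests over a single HTTP/2 connection with a short client-side timeout and observe what comes back. Again, httpx, the endpoint URL, and the timeout value are assumptions to adapt to your setup:

```python
# Sketch: fire a batch of concurrent requests over one HTTP/2 connection
# and see how timeouts surface on the client side. Values are placeholders.
import asyncio
import httpx

ALB_URL = "https://my-alb.example.com/api/items"  # hypothetical endpoint

async def fire_batch(n: int = 20) -> None:
    timeout = httpx.Timeout(5.0)  # short, so client-side handling is visible
    async with httpx.AsyncClient(http2=True, timeout=timeout) as client:
        results = await asyncio.gather(
            *(client.get(ALB_URL) for _ in range(n)),
            return_exceptions=True,
        )
        for result in results:
            if isinstance(result, httpx.TimeoutException):
                print("timed out")  # what should the product do here?
            elif isinstance(result, Exception):
                print("error:", result)
            else:
                print(result.http_version, result.status_code)

asyncio.run(fire_batch())
```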
I read that Kubernetes can do load balancing within its clusters, and the pods can talk to each other. I have not tried load-balancing tests on a K8s cluster's pods.
From the K8s documentation:
Service discovery and load balancing: Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
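As a small illustration, the sketch below uses the official Kubernetes Python client to list the pod endpoints behind a hypothetical Service named "orders"; these are the addresses the Service spreads traffic across.

```python
# Sketch: list the pod IPs behind a Service, i.e. the endpoints Kubernetes
# load-balances across. Service name and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() inside a pod
v1 = client.CoreV1Api()

endpoints = v1.read_namespaced_endpoints(name="orders", namespace="default")
for subset in endpoints.subsets or []:
    for address in subset.addresses or []:
        pod = address.target_ref.name if address.target_ref else "unknown"
        print(f"{address.ip} -> {pod}")
```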