Suppose we have a service running in a virtual private cloud (VPC), so we cannot reach its endpoints from our desktop. Sooner or later, though, we will want to call it to fetch some data or to troubleshoot something.
What’s the solution in that case?
The simplest approach is to create an SSH tunnel to a host in the fleet and forward calls through it:
ssh -i <pem file> -L 9001:localhost:80 ec2-user@<host from production fleet>
and then, in a separate tab, hit the endpoint:
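For instance, with the tunnel above forwarding local port 9001 to port 80 on the remote host, a plain-HTTP call looks like this (the /hello path is just an illustrative endpoint):

```shell
# Goes through the tunnel: localhost:9001 -> remote host:80
curl http://localhost:9001/hello
```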
Another solution, which we prefer, is to use an intermediate host that is whitelisted to access our production environment – usually a host already used for various operational purposes. To whitelist it, you add its IP to the load balancer’s security group. In our case, we use a very small EC2 host (t1.micro) with an Elastic IP bound to it, so it always keeps the same address, and then we run:
ssh -fN -i <pem file> -L 8443:<load balancer dns name>:443 ec2-user@<host name or elastic ip>
curl -k https://localhost:8443/hello
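The whitelisting step mentioned above can be sketched with the AWS CLI; the security group ID and the Elastic IP below are placeholders, not values from our setup:

```shell
# Allow HTTPS traffic from the intermediate host's Elastic IP
# into the load balancer's security group (both values are placeholders)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 203.0.113.10/32
```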
Why we prefer the second approach:
- It distributes calls across the entire fleet, rather than to a single host as the first one does
- It lets us make calls over HTTPS
One disadvantage is that if you want to check logs, you have to SSH into a host over a separate connection, whereas with the first approach you can do that in the same connection.
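For example, with the second approach, tailing logs means opening a second SSH connection directly to one of the hosts (the log path here is hypothetical):

```shell
# Separate connection just for logs; the log path is a made-up example
ssh -i <pem file> ec2-user@<host from production fleet> 'tail -f /var/log/service.log'
```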
To sum up: the first way is simple and efficient if you only have to run a few API calls, but if you have to make a lot of requests, then it’s definitely worth creating the setup suggested by the second approach!
Please don’t hesitate to share this blog with anyone who might benefit from the information presented here!