Hybrid Deployment Example Using AWS & a Raspberry Pi

A hybrid deployment model combines on-premises and cloud infrastructure to create a flexible, scalable computing environment. In this blog I'll demonstrate a basic hybrid deployment using AWS Cloud and a Raspberry Pi I have in my house. But first, let's outline a few of the key benefits of hybrid deployments in the real world.

  1. Cost Savings: With a hybrid model, organisations can take advantage of cloud computing where it is most effective whilst keeping critical data and/or applications on-premises, saving on hardware and energy costs.
  2. Scalability & Control: By running applications in a hybrid cloud model, businesses gain greater control over their data, i.e. where it is stored, and if demand increases suddenly they can leverage the cloud to scale up their workloads quickly.
  3. Business Continuity: A hybrid cloud deployment can improve business continuity and reduce costly downtime. For example, business-critical data can be replicated to the cloud, or on-demand cloud compute can be scaled up during a spike in demand that would otherwise overload the business's private servers.

Overall, a hybrid deployment model can provide a balance of scalability, flexibility, and cost-effectiveness, making it an attractive option for many organisations.

I will now demonstrate what a hybrid deployment can look like in CockroachDB by running a small, simple cluster spanning AWS and a local Raspberry Pi. I will link to the relevant documentation for most of this setup to keep this post from getting too long.

The first thing I needed to do was figure out the best way to enable connectivity between my Raspberry Pi and the AWS network, specifically the subnets within a VPC I'd already created. I decided that the most secure approach was a Site-to-Site VPN between my home network and AWS. Unfortunately this configuration is specific to your hardware, but for the purposes of this blog I'll share the guide I used to set up my VPN to AWS: https://mjasion.pl/posts/cloud/how-to-setup-aws-site-to-site-vpn-with-unifi-udm/

If you've set up a similar VPN and would like to test that your configuration and connection are working, a good way to do this is to spin up an EC2 instance in AWS and ping back and forth between it and a device on your local network.

For reference, the CIDRs I am using are below:

Remote CIDR - 10.10.0.0/16 (AWS Network)

Local CIDR - 192.168.0.0/24 (Home Network)

For testing purposes I deployed an EC2 instance with a security group allowing ICMP (ping) inbound and outbound, and pinged to and from my local machine. If the VPN connection is working correctly, you'll receive responses from both sides.
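
If you'd like a concrete example, the test looks roughly like the sketch below, assuming an EC2 private IP of 10.10.1.25 and a local device at 192.168.0.50 (both placeholders within the CIDRs above):

```bash
# From the EC2 instance, ping a device on the home network
ping -c 4 192.168.0.50

# From a machine on the home network, ping the EC2 instance's private IP
ping -c 4 10.10.1.25
```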

Now that we have verified connectivity between home (on-prem) and the cloud, we can look at setting up CockroachDB to run across these machines.

First of all, I created two EC2 instances in different availability zones as a best practice. The machine type chosen was c5.large, as that is the closest match to my Raspberry Pi (2 vCPU, 4 GB memory).
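
If you prefer the CLI to the console, launching an instance looks something like this; the AMI, key pair, subnet, and security group IDs are all placeholders for your own VPC resources:

```bash
# Launch a c5.large in one subnet/AZ; repeat with a second subnet for the other AZ.
# All IDs below are placeholders for your own resources.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type c5.large \
  --key-name my-keypair \
  --subnet-id subnet-aaaa1111 \
  --security-group-ids sg-bbbb2222 \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=crdb-node-1}]'
```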

At this point I also booted the Raspberry Pi and ensured it was online and connected to the network. That gave me three machines to install CockroachDB on; following the instructions here, I was able to install everything required to get going - https://www.cockroachlabs.com/docs/v22.2/install-cockroachdb-linux (If you're following along, you'll need the ARM binary for the Raspberry Pi.)
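
For reference, the install boils down to downloading and extracting the binary on each machine. The version below is just an example, so check the install page above for the latest release:

```bash
# On the EC2 instances (x86_64) - version number is an example
curl https://binaries.cockroachdb.com/cockroach-v22.2.5.linux-amd64.tgz | tar -xz
sudo cp -i cockroach-v22.2.5.linux-amd64/cockroach /usr/local/bin/

# On the Raspberry Pi, grab the ARM build instead
curl https://binaries.cockroachdb.com/cockroach-v22.2.5.linux-arm64.tgz | tar -xz
sudo cp -i cockroach-v22.2.5.linux-arm64/cockroach /usr/local/bin/
```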

To spread the load across the three nodes I needed to create a Network Load Balancer. The way I did this differs slightly from the documentation here - https://www.cockroachlabs.com/docs/stable/deploy-cockroachdb-on-aws.html#step-4-set-up-load-balancing - but it remains largely the same, so you can follow it up to the point of creating the target group. Rather than adding instances to the group, I needed to add IP addresses instead. See below for an example.
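
If you're scripting this, the target group step looks roughly like the sketch below; the VPC ID, target group ARN, and IPs are placeholders. Note that the Pi's address sits outside the VPC CIDR, so it has to be registered with AvailabilityZone=all.

```bash
# Create a TCP target group that takes IP targets rather than instance IDs
aws elbv2 create-target-group \
  --name crdb-tg \
  --protocol TCP \
  --port 26257 \
  --vpc-id vpc-cccc3333 \
  --target-type ip \
  --health-check-protocol HTTP \
  --health-check-port 8080 \
  --health-check-path "/health?ready=1"

# Register the two EC2 private IPs plus the Pi's VPN-reachable address.
# Targets outside the VPC CIDR must set AvailabilityZone=all.
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/crdb-tg/0000000000000000 \
  --targets Id=10.10.1.25 Id=10.10.2.26 "Id=192.168.0.60,AvailabilityZone=all"
```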

At this point the target group is unhealthy, which is expected: CockroachDB isn't running on any of these nodes yet. They're idle with just the binary installed, ready to be configured.

I then referred back to the documentation to configure and run CockroachDB on AWS - https://www.cockroachlabs.com/docs/stable/deploy-cockroachdb-on-aws.html - and followed the same steps to configure and run CockroachDB on my Raspberry Pi, as it can communicate with the AWS nodes over the VPN. When following the instructions to create the certificates for the nodes, it's important to include the IP address and DNS name of the load balancer, and optionally an FQDN if you're pointing a domain name at it. (An easy way to obtain the load balancer's IP is to simply curl or nslookup its DNS record.)
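
To make this concrete, here is a sketch of the cert, start, and init steps from the docs, using placeholder addresses (10.10.1.25 and 10.10.2.26 for the EC2 nodes, 192.168.0.60 for the Pi, 10.10.1.100 for the load balancer, and an example NLB DNS name):

```bash
# Resolve the load balancer's current IP so it can go in the node cert
nslookup crdb-nlb-0000000000000000.elb.eu-west-1.amazonaws.com

# Create the CA, then a node cert per machine (repeat create-node with each
# node's own address); the LB DNS name and IP must be included
cockroach cert create-ca --certs-dir=certs --ca-key=my-safe-directory/ca.key
cockroach cert create-node \
  10.10.1.25 \
  localhost 127.0.0.1 \
  crdb-nlb-0000000000000000.elb.eu-west-1.amazonaws.com \
  10.10.1.100 \
  --certs-dir=certs --ca-key=my-safe-directory/ca.key
cockroach cert create-client root --certs-dir=certs --ca-key=my-safe-directory/ca.key

# After copying the certs to each machine, start each node with its own
# --advertise-addr
cockroach start \
  --certs-dir=certs \
  --advertise-addr=10.10.1.25 \
  --join=10.10.1.25,10.10.2.26,192.168.0.60 \
  --cache=.25 --max-sql-memory=.25 \
  --background

# One-time cluster initialisation, run once from any machine with client certs
cockroach init --certs-dir=certs --host=10.10.1.25
```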

Once I'd completed the installation and initialisation of the database, I tested connectivity to it using the local cockroach binary on my MacBook (where I created the certs and ran the setup).
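
Assuming the certs directory from the previous step and the placeholder load balancer IP of 10.10.1.100, the connection test is a one-liner:

```bash
# Open a SQL shell against the cluster via the load balancer
cockroach sql --certs-dir=certs --host=10.10.1.100:26257
```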

I used the load balancer's IP address, but since each CockroachDB node acts as a gateway to the whole database, you could use any of the nodes' IP addresses here. Connecting via the load balancer is a good way of verifying that it is working correctly; likewise, it's worth checking the health of the target group in the AWS console too, to ensure that all nodes are healthy following the setup.
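
You can also check target health from the CLI rather than the console; the ARN below is a placeholder:

```bash
# List each target and its current health state
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/crdb-tg/0000000000000000 \
  --query 'TargetHealthDescriptions[].[Target.Id,TargetHealth.State]' \
  --output table
```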

From here I created myself a user with the CREATE USER command and granted myself admin privileges with the GRANT command.
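
Something like the following, run through the same SQL shell (the username and password are placeholders):

```bash
# Create a user and grant it the admin role; swap in your own credentials
cockroach sql --certs-dir=certs --host=10.10.1.100 \
  -e "CREATE USER dan WITH PASSWORD 'some-strong-password';" \
  -e "GRANT admin TO dan;"
```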

From here, I opened a browser, navigated to one of the nodes' IP addresses on port 8080, logged in with my new credentials, and could see a 3-node cluster spanning AWS and my home. (Optionally, you can set up another target group to load balance the UI on port 8080.)

Overall, this is a very basic, experimental way of deploying a hybrid environment using AWS and CockroachDB, but it demonstrates how quickly you can get an environment going; from here you can start experimenting with more advanced features of the database, such as geo-partitioning.