The cloud is supposed to make things easier, but there are still plenty of hoops to jump through, and plenty of ways to get tripped up. This guide is written from my own hard-earned experience on a recent project. I hope it can help you.
You’ve got a nice Phoenix app working perfectly in its own ecosystem, maybe on Fly.io and their easy-to-use postgres setup. But you want to mix things up.
Let’s say you want to use GCP Cloud SQL Postgres. Maybe you don’t want to manage your own db. Or maybe you want a second db for analytics. Maybe you’re using DBT and want a centralized cloud db solution.
This part should be easy. You can mostly just follow the prompts in the GCP SQL console or follow the getting started docs.
You’ve got options. Too many options. You could just use a public IP address and not require SSL. Don’t do that.
Here are some better options:

- Require SSL and manage client certificates yourself.
- Use GCP’s Cloud SQL Auth Proxy.

I recommend the auth proxy. By the way, DBT (both cloud and core) apparently doesn’t do SSL connections, so there you go. Other BI tools like Mode do, but don’t let you configure certificates (as far as I could tell).
If you stubbornly still wish to pursue the SSL route, be sure to check out the footnote at the bottom for a healthy dose of the pain I experienced, and a few useful tidbits to guide you through the process*.
On your Cloud SQL instance page, head over to the “Connections” sidebar tab, then the Security tab. Make sure to check “Allow only SSL connections.” Don’t add any certs. On the Networking tab, you can go with a Public or Private IP (private requires an extra flag on the proxy), but don’t authorize any networks.
Follow the setup instructions on the auth proxy repo; they are good. For more confusing instructions, check out the GCP docs.
You’ve basically got two options: download the CLI tool and run it in the background, or use the Docker version of it. Both work fine and take similar configuration, which you can easily grab from the Cloud SQL overview page (if you use Docker, make sure to set `--address 0.0.0.0`). Pro tip: configure the port to be something other than 5432 so that you can have local postgres running at the same time. You’ll probably authorize with the service account you made, which is either the `-c` flag pointing to the downloaded key file, or the `-j` flag with something like `"$(cat /path/to/keyfile)"` (don’t forget the double quotes or you’ll be scratching your head for a while. Or maybe not at first, because it doesn’t need them in your local environment, but it does in the Docker release!).
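Concretely, the two invocations look roughly like this (the instance connection name, port, and key path are placeholders, and the flags are from the v2 proxy):

```sh
# Option 1: the downloaded binary, running in the background
./cloud-sql-proxy \
  --port 5433 \
  --credentials-file /path/to/keyfile.json \
  my-project:us-central1:my-instance

# Option 2: the Docker image; --address 0.0.0.0 lets the host
# reach the proxy from outside the container
docker run --rm \
  -v /path/to/keyfile.json:/key.json:ro \
  -p 127.0.0.1:5433:5433 \
  gcr.io/cloud-sql-connectors/cloud-sql-proxy \
  --address 0.0.0.0 --port 5433 \
  --credentials-file /key.json \
  my-project:us-central1:my-instance
```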
With the proxy running, just point DBeaver or pgAdmin or psql or ecto or dbt at localhost on the port you set, with the db name, user, and password, and you’re done (no SSL). The proxy intercepts the call and wraps it in a special encrypted tunnel it established with the database.
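With psql, for example (assuming the port 5433 from above):

```sh
# a plain, non-SSL connection; the proxy handles encryption to Cloud SQL
psql "host=127.0.0.1 port=5433 dbname=mydb user=myuser"
```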
This isn’t too hard, but there are a few ways of doing it. I kept it simple and added it to the Dockerfile:
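Something along these lines (the version here is arbitrary, so check the proxy’s releases page for the current one, and this assumes `curl` is available in that build stage):

```dockerfile
# download the auth proxy binary next to the release and make it executable
RUN curl -fsSL -o /app/bin/cloud-sql-proxy \
      https://storage.googleapis.com/cloud-sql-connectors/cloud-sql-proxy/v2.8.0/cloud-sql-proxy.linux.amd64 \
 && chmod +x /app/bin/cloud-sql-proxy
```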
"Then I use an entrypoint.sh script to kick everything off:
Where the script is:
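A minimal sketch, assuming a standard Phoenix release (so `bin/server` starts the app) and the secret names used further down, which are my own:

```sh
#!/bin/sh
set -e

# start the auth proxy in the background, logging to a file for debugging
/app/bin/cloud-sql-proxy \
  --port 5433 \
  --json-credentials "$GCP_SA_KEY_JSON" \
  "$SQL_CONNECTION_NAME" > /app/cloud-sql-proxy.log 2>&1 &

# then hand off to the Phoenix release in the foreground
exec /app/bin/server
```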
This seems to work fine, and now you can point your ecto config at the proxy just like in local. One important caveat: don’t use SSL and don’t use IPv6, or you’ll get `:nxdomain` errors.
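In `runtime.exs` that ends up looking something like this (app and repo names assumed). The `socket_options` bit matters because Fly’s generated config often forces `:inet6`, which is exactly what produces the `:nxdomain` errors against `127.0.0.1`:

```elixir
# runtime.exs
config :my_app, MyApp.Repo,
  url: System.get_env("DATABASE_URL"),
  # the proxy already encrypts the tunnel, so no SSL here
  ssl: false,
  # force IPv4; :inet6 can't resolve 127.0.0.1 and yields :nxdomain
  socket_options: [:inet]
```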
You’ll need to put the database url in a fly secret (in the format `ecto://<user>:<password>@127.0.0.1:<auth proxy port>/<db name>`), along with the SQL connection name (maybe that doesn’t need to be a secret, but why not?). You also need the service account key JSON contents in a secret, and this is where the unix tricks mentioned in the footnote can help. With that, you should be good to go! If things aren’t working, check the fly logs, and the log file set up in the entrypoint script, for clues.
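Those unix tricks boil down to `fly secrets set` with command substitution (values are placeholders; the names match the entrypoint sketch above):

```sh
# quote the $(cat ...) so the JSON's whitespace survives intact
fly secrets set \
  DATABASE_URL="ecto://myuser:mypassword@127.0.0.1:5433/mydb" \
  SQL_CONNECTION_NAME="my-project:us-central1:my-instance" \
  GCP_SA_KEY_JSON="$(cat /path/to/keyfile.json)"
```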
Hopefully you can now connect to GCP Cloud SQL securely, from any client you please, with ease and minimal frustration!
I went with the SSL-and-certs approach first because it seemed like a very reasonable one, right? And it wasn’t hard at first, at least not locally:
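After downloading the server CA and client cert/key from the Cloud SQL console, connecting with psql goes something like this (the IP is a stand-in for your instance’s public address):

```sh
# verify-ca: check the server cert against the downloaded CA and
# present the client cert/key (psql insists the key file be chmod 0600)
psql "sslmode=verify-ca sslrootcert=server-ca.pem sslcert=client-cert.pem sslkey=client-key.pem host=203.0.113.5 port=5432 user=myuser dbname=mydb"
```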
No problem. Ok, now how to connect my local DBT? Oh right, they don’t do SSL. Neither does DBT cloud. Ok, go turn off “Require SSL” in the Cloud SQL “Security” tab and cringe a little.
Now let’s get Ecto set up. The db url is pretty straightforward. Set `ssl: true`. Set a bunch of Erlang’s ssl configs to point to the certs and deal with validating the certificate authority. But here’s where things get painful.
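The shape of it, before things go sideways (app name and cert paths assumed):

```elixir
config :my_app, MyApp.Repo,
  url: System.get_env("DATABASE_URL"),
  ssl: true,
  ssl_opts: [
    verify: :verify_peer,
    cacertfile: "/certs/server-ca.pem",
    certfile: "/certs/client-cert.pem",
    keyfile: "/certs/client-key.pem"
    # ...and this is where the hostname-verification pain below begins
  ]
```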
GCP SQL wants a `verify-ca` level of validation in psql terms, which validates the chain up to the CA but doesn’t check the server’s hostname, while `verify-full` checks the hostname too. In this case, the host is just a public IP that GCP created, so hostname verification fails, and you don’t want that. But Erlang’s ssl only offers `:verify_none` or `:verify_peer`, and `:verify_peer` behaves more like psql’s `verify-full`, which doesn’t work. So you end up doing a lot of funny stuff with the Erlang ssl and friends libs (see previous link), which still don’t quite work as advertised, so you write your own `verify_fun` and hope that you didn’t go against too many RFCs or OWASP guidelines, and cringe some more, worried about man-in-the-middle attacks (even though GCP support staff claim you don’t have to worry about MITM). Ultimately, certificate pinning is probably your best bet, but by now you’ve reached that “regret your life decisions” phase and give up.
But even if you haven’t given up yet, you still have to get this working on fly.io. The big question is where to put those sensitive certificate files. GCP offers its own secrets vault, but that’s too confusing to figure out right now. fly.io has a system for secrets, but will it work for things like certificates, which are long and have significant whitespace? It turns out there are some clever unix tricks to make it work, but remember, you’ve already given up by now. Also, you realize you’d need to add your fly.io public IP as an authorized network, but fly.io uses shared IPv4 addresses, which would let other fly apps that share your IP into the allowed network, and you cringe even further. (You can request a dedicated IPv4 address at a marginal cost, so this could be avoided.)