Deploy CockroachDB on Google Cloud Platform GCE (Insecure)

Warning:
As of November 10, 2018, CockroachDB v1.0 is no longer supported. For more details, refer to the Release Support Policy.

This page shows you how to manually deploy an insecure multi-node CockroachDB cluster on Google Cloud Platform's Compute Engine (GCE), using Google's TCP Proxy Load Balancing service to distribute client traffic.

Warning:
If you plan to use CockroachDB in production, we strongly recommend using a secure cluster instead; see the secure version of this tutorial for instructions.

Requirements

You must have SSH access to each machine with root or sudo privileges. This is necessary for distributing binaries and starting CockroachDB.

Recommendations

  • If you plan to use CockroachDB in production, we recommend using a secure cluster instead. Using an insecure cluster comes with risks:

    • Your cluster is open to any client that can access any node's IP addresses.
    • Any user, even root, can log in without providing a password.
    • Any user, connecting as root, can read or write any data in your cluster.
    • There is no network encryption or authentication, and thus no confidentiality.
  • For guidance on cluster topology, clock synchronization, and file descriptor limits, see Recommended Production Settings.

  • Decide how you want to access your Admin UI:

    • Only from specific IP addresses, which requires you to set firewall rules to allow communication on port 8080 (documented on this page).
    • Using an SSH tunnel, which requires you to use --http-host=localhost when starting your nodes.
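
If you choose the SSH tunnel approach, a minimal sketch looks like the following (standard OpenSSH port forwarding; substitute your own username and node address):

    # On the node, bind the Admin UI to localhost only.
    $ cockroach start --insecure \
    --background \
    --http-host=localhost

    # On your local machine, forward local port 8080 to the node's Admin UI,
    # then open http://localhost:8080 in a local browser.
    $ ssh -L 8080:localhost:8080 <username>@<node external IP address>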

Step 1. Configure your network

CockroachDB requires TCP communication on two ports:

  • 26257 (tcp:26257) for inter-node communication (i.e., working as a cluster)
  • 8080 (tcp:8080) for exposing your Admin UI

Inter-node communication works by default using your GCE instances' internal IP addresses, which allow communication with other instances on CockroachDB's default port 26257. However, to expose your Admin UI and allow traffic from the TCP proxy load balancer and health checker to your instances, you need to create firewall rules for your project.

Creating Firewall Rules

When creating firewall rules, we recommend using Google Cloud Platform's tag feature, which lets you specify that you want to apply the rule only to instances that include the same tag.

Admin UI

Field                         Recommended Value
Name                          cockroachadmin
Source filter                 IP ranges
Source IP ranges              Your local network's IP ranges
Allowed protocols and ports   tcp:8080
Target tags                   cockroachdb
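
If you prefer the gcloud CLI to the console, a rule matching the values above might look like the following sketch (substitute your own source ranges):

    # Allow Admin UI traffic on port 8080 from your local network only,
    # targeting instances tagged "cockroachdb".
    $ gcloud compute firewall-rules create cockroachadmin \
    --allow tcp:8080 \
    --source-ranges <your local network's IP ranges> \
    --target-tags cockroachdb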

Application Data

Applications will not connect directly to your CockroachDB nodes. Instead, they'll connect to GCE's TCP Proxy Load Balancing service, which automatically routes traffic to the instances that are closest to the user. Because this service is implemented at the edge of the Google Cloud, you'll need to create a firewall rule to allow traffic from the load balancer and health checker to your instances. This is covered in Step 3.

Warning:
When using TCP Proxy Load Balancing, you cannot use firewall rules to control access to the load balancer. If you need such control, consider using Network TCP Load Balancing instead, but note that it cannot be used across regions. You might also consider using the HAProxy load balancer (see Manual Deployment for guidance).

Step 2. Create instances

Create an instance for each node you plan to have in your cluster. We recommend:

  • Running at least 3 nodes to ensure survivability.
  • Selecting the same continent for all of your instances for best performance.

If you used a tag for your firewall rules, when you create the instance, select Management, disk, networking, SSH keys. Then on the Networking tab, in the Network tags field, enter cockroachdb.
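
As a sketch, the equivalent with the gcloud CLI (the instance name, zone, and machine type below are placeholders; the tag matches the firewall rules from Step 1):

    # Create a tagged instance; repeat with a different name for each node.
    $ gcloud compute instances create cockroach-node1 \
    --zone <zone> \
    --machine-type n1-standard-4 \
    --tags cockroachdb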

Step 3. Set up TCP Proxy Load Balancing

Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing:

  • Performance: Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second).

  • Reliability: Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes.

GCE offers fully-managed TCP Proxy Load Balancing. This service lets you use a single IP address for all users around the world, automatically routing traffic to the instances that are closest to the user.

Warning:
When using TCP Proxy Load Balancing, you cannot use firewall rules to control access to the load balancer. If you need such control, consider using Network TCP Load Balancing instead, but note that it cannot be used across regions. You might also consider using the HAProxy load balancer (see Manual Deployment for guidance).

To use GCE's TCP Proxy Load Balancing service:

  1. For each zone in which you're running an instance, create a distinct instance group.
    • To ensure that the load balancer knows where to direct traffic, specify a port name mapping, with tcp26257 as the Port name and 26257 as the Port number.
  2. Add the relevant instances to each instance group.
  3. Configure TCP Proxy Load Balancing.
    • During backend configuration, create a health check, setting the Protocol to HTTP, the Port to 8080, and the Request path to /health. If you want to maintain long-lived SQL connections that may be idle for more than tens of seconds, increase the backend timeout setting accordingly.
    • During frontend configuration, reserve a static IP address and choose a port. Note this address/port combination, as you'll use it for all of your client connections.
  4. Create a firewall rule to allow traffic from the load balancer and health checker to your instances. This is necessary because TCP Proxy Load Balancing is implemented at the edge of the Google Cloud.
    • Be sure to set Source IP ranges to 130.211.0.0/22 and 35.191.0.0/16 and set Target tags to cockroachdb (not to the value specified in the linked instructions).
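
As with the Admin UI rule, you can create this firewall rule from the gcloud CLI instead of the console. A sketch using the source ranges and tag above (the rule name is a placeholder):

    # Allow the TCP proxy load balancer and health checker to reach the nodes
    # on the CockroachDB port and the Admin UI port.
    $ gcloud compute firewall-rules create cockroachlb \
    --allow tcp:26257,tcp:8080 \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 \
    --target-tags cockroachdb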

Step 4. Start the first node

  1. SSH to your instance:

    $ ssh <username>@<node1 external IP address>
    
  2. Install the latest CockroachDB binary:

    # Get the latest CockroachDB tarball.
    $ curl -O https://binaries.cockroachdb.com/cockroach-v1.0.7.linux-amd64.tgz
    
    # Extract the binary.
    $ tar -xzf cockroach-v1.0.7.linux-amd64.tgz  \
    --strip=1 cockroach-v1.0.7.linux-amd64/cockroach
    
    # Move the binary.
    $ sudo mv cockroach /usr/local/bin/
    
  3. Start a new CockroachDB cluster with a single node:

    $ cockroach start --insecure \
    --background
    

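To sanity-check that the node is up, you can hit the same /health endpoint that the load balancer's health check uses in Step 3 (a quick check, not an official step):

    # A successful response indicates the node's HTTP endpoint is serving.
    $ curl http://localhost:8080/health
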
Step 5. Add nodes to the cluster

At this point, your cluster is live and operational but contains only a single node. Next, scale your cluster by setting up additional nodes that will join the cluster.

  1. SSH to your instance:

    $ ssh <username>@<additional node external IP address>
    
  2. Install CockroachDB from our latest binary:

    # Get the latest CockroachDB tarball.
    $ curl -O https://binaries.cockroachdb.com/cockroach-v1.0.7.linux-amd64.tgz
    
    # Extract the binary.
    $ tar -xzf cockroach-v1.0.7.linux-amd64.tgz  \
    --strip=1 cockroach-v1.0.7.linux-amd64/cockroach
    
    # Move the binary.
    $ sudo mv cockroach /usr/local/bin/
    
  3. Start a new node that joins the cluster using the first node's internal IP address:

    $ cockroach start --insecure \
    --background \
    --join=<node1 internal IP address>:26257
    
  4. Repeat these steps for each instance you want to use as a node.
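
To confirm that the additional nodes joined, one option is the built-in node listing command, run from any node (a sketch; check cockroach node --help on your version for availability):

    # List the IDs of the active nodes in the cluster.
    $ cockroach node ls --insecure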

Step 6. Test your cluster

CockroachDB replicates and distributes data for you behind the scenes and uses a gossip protocol to enable each node to locate data across the cluster.

To test this, use the built-in SQL client as follows:

  1. SSH to your first node:

    $ ssh <username>@<node1 external IP address>
    
  2. Launch the built-in SQL client and create a database:

    $ cockroach sql --insecure
    
    > CREATE DATABASE insecurenodetest;
    
  3. In another terminal window, SSH to another node:

    $ ssh <username>@<node3 external IP address>
    
  4. Launch the built-in SQL client:

    $ cockroach sql --insecure
    
  5. View the cluster's databases, which will include insecurenodetest:

    > SHOW DATABASES;
    
    +--------------------+
    |      Database      |
    +--------------------+
    | crdb_internal      |
    | information_schema |
    | insecurenodetest   |
    | pg_catalog         |
    | system             |
    +--------------------+
    (5 rows)
    
  6. Use CTRL-D, CTRL-C, or \q to exit the SQL shell.

Step 7. Test load balancing

The GCE load balancer created in step 3 can serve as the client gateway to the cluster. Instead of connecting directly to a CockroachDB node, clients connect to the load balancer, which will then redirect the connection to a CockroachDB node.

To test this, install CockroachDB locally and use the built-in SQL client as follows:

  1. Install CockroachDB on your local machine, if it's not there already.

  2. Launch the built-in SQL client, with the --host flag set to the load balancer's IP address:

    $ cockroach sql --insecure \
    --host=<load balancer IP address> \
    --port=<load balancer port>
    
  3. View the cluster's databases:

    > SHOW DATABASES;
    
    +--------------------+
    |      Database      |
    +--------------------+
    | crdb_internal      |
    | information_schema |
    | insecurenodetest   |
    | pg_catalog         |
    | system             |
    +--------------------+
    (5 rows)
    

    As you can see, the load balancer redirected the query to one of the CockroachDB nodes.

  4. Check which node you were redirected to:

    > SELECT node_id FROM crdb_internal.node_build_info LIMIT 1;
    
    +---------+
    | node_id |
    +---------+
    |       1 |
    +---------+
    (1 row)
    
  5. Use CTRL-D, CTRL-C, or \q to exit the SQL shell.

Step 8. Monitor the cluster

View your cluster's Admin UI by going to http://<any node's external IP address>:8080.

On this page, verify that the cluster is running as expected:

  1. Click View nodes list on the right to ensure that all of your nodes successfully joined the cluster.

  2. Click the Databases tab on the left to verify that insecurenodetest is listed.

Tip:
You can also use Prometheus and other third-party, open source tools to monitor and visualize cluster metrics and send notifications based on specified rules. For more details, see Monitor CockroachDB with Prometheus.
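
Each node also serves its metrics in Prometheus format on the Admin UI port. As a quick sketch, you can inspect them directly; the path below is the endpoint a Prometheus scrape job would target (verify it against your version):

    # Dump the node's time-series metrics in Prometheus exposition format.
    $ curl http://<any node's external IP address>:8080/_status/vars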

Step 9. Use the database

Now that your deployment is working, you can:

  1. Implement your data model.
  2. Create users and grant them privileges.
  3. Connect your application. Be sure to connect your application to the GCE load balancer, not to a CockroachDB node.
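
For example, a PostgreSQL-compatible connection URL pointed at the load balancer might look like the following sketch (the database name reuses the test database created earlier; sslmode=disable reflects the insecure cluster):

    # Example connection URL for a PostgreSQL-compatible driver or ORM.
    postgresql://root@<load balancer IP address>:<load balancer port>/insecurenodetest?sslmode=disable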
