This is the second part of the mini-series Exasol on AWS. Here’s the first part.
Cloud UI is an extension to EXAoperation that makes it easy for you to
- Scale up & down
- Increase storage capacity
- Scale out by adding nodes to the cluster
Cloud UI can be reached by adding the port number 8835 to the URL of your License Server and uses the same credentials as EXAoperation.
Scale down to m5.large with Cloud UI
Depending on the load on your Exasol cluster, you can scale up your data nodes to more powerful EC2 instances when load is high and scale down to less expensive EC2 instances when user demand is lower.
I started my little cluster with r5.large instances. Now I want to scale down to m5.large. Enter Cloud UI:
You can see on the right side that scaling down to m5.large reduces both available memory and costs. I click on APPLY and confirm the subsequent pop-up with EXECUTE. The steps the system goes through can be monitored in EXAoperation:
Notice that the database got restarted during that process.
Scale out by adding data nodes
I want to expand my present 1+0 cluster to a 2+1 cluster. First I add another active node:
As you can see, this not only increases the overall available memory but also the compute power. Storage capacity is usually increased as well when adding a node. Not in this particular case, though, because I will also go from redundancy 1 to redundancy 2.
The log looks like this now:
My one-node cluster used redundancy 1; now I want to change that to redundancy 2. That step is of course not required if you started with a multi-node cluster that already uses redundancy 2. See here for more details about redundancy in Exasol.
To increase redundancy, I go to the EXAstorage page of EXAoperation:
The new EC2 instance for the new data node can be renamed like this:
That makes it easier to identify the nodes, for example when associating elastic IPs with them. I do that now for n12 in the same way I did it for n11 before.
The elastic IPs of the data nodes must then be entered into the connection details of clients like DbVisualizer in this example:
After adding a new active node, that node is initially empty until a REORGANIZE operation is done, for example a REORGANIZE DATABASE:
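A minimal sketch of the statement, with the more targeted schema- and table-level variants shown for comparison (RETAIL and RETAIL.SALES are hypothetical names used for illustration only):

```sql
-- Redistribute all tables across all active nodes (database level)
REORGANIZE DATABASE;

-- Alternatively, limit the redistribution to a single schema or table
REORGANIZE SCHEMA retail;
REORGANIZE TABLE retail.sales;
```

The database-level form is the simplest, at the price of touching every table; the table-level form lets you spread the work over several maintenance windows.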
I have a 2+0 cluster now: Mirrored segments on two active nodes but no reserve node.
Adding reserve nodes
To get a 2+1 cluster, I need to add a reserve node. Again, that’s quite easy to do with Cloud UI:
Within about 10 minutes, the log should show something like this:
Notice that there was no database restart this time. The new node should be renamed and have a new elastic IP associated as shown before. That IP also needs to be added to the client connection details. See here if you wonder what reserve nodes are good for.
Now that I have got a 2+1 Exasol cluster running on AWS, I’m ready to demonstrate what happens if one node fails. That will be the next part of this series 🙂
It’s amazingly easy to run an Exasol Cluster on Amazon Web Services (AWS).
Subscribe to Exasol in the AWS Marketplace
After registering and logging in to your AWS account, go to the AWS Marketplace and search for Exasol:
Click on the Exasol Single Node and Cluster BYOL link and then on Continue to Subscribe:
After reviewing the T&C, click on Accept Terms. This message is shown afterwards:
Create Key Pair
Now login to the AWS Management Console, select a region close to your location and open the EC2 Dashboard. Click on Key Pairs:
Click on Create Key Pair now and enter a name for the new Key Pair, then click on Create:
Now you are ready to use the Exasol Cloud Deployment Wizard. Stay logged in with AWS Management Console as you will be routed back there by the Deployment Wizard soon.
Using the Cloud Deployment Wizard
Put this URL into your browser: https://cloudtools.exasol.com/ and then click on AWS:
Select a region close to your location and click on Continue:
Click on Advanced Configuration and specify
- License Model: Bring-your-own-license
- System Type: Enterprise Cluster
- Instance Family: Memory Optimized
- Instance Type: r5
- Instance Model: r5.large
- Number of DB Nodes: 1
then click Continue.
BYOL works without a license file, with a limit of 20 GB of memory for the database. That means Exasol charges no costs for this environment (Amazon does, though).
Select create new VPC and click on Launch Stack on this page now:
This takes you to the Quick create stack page of CloudFormation in AWS Management Console:
Enter these details on the page:
- Key Pair (select the key pair created previously)
- SYS User Password
- ADMIN User Password
- Public IPs (true)
Tick the acknowledge box and click on Create stack
Now go to the EC2 details page and copy the Public IP of the management node:
Put that with an https:// prefix into a browser and click on Advanced:
Then you should see a progress bar like this:
That screen changes to the EXAoperation login screen after about 30 minutes.
Log in as user admin with the password you specified previously on the CloudFormation Quick create stack page. There should be a database running:
As you can see now, you have a database, a remote archive volume using an Amazon S3 bucket ready for backup & restore and a log service to monitor your system.
This database is limited to 20 GB of memory unless a license file is uploaded to the License Server, also known as the management node. For educational purposes, I don't need more.
Use Elastic IPs
The public IPs of your data nodes will change upon every restart, which is probably not convenient.
Therefore, click on Elastic IPs in the EC2 dashboard, then click on Allocate new address:
Select Amazon pool then click on Allocate:
Click on the IP on the following screen:
Select the action Associate address on the next screen:
Select the data node from the Select instance pull down menu and click on Associate:
Close the next screen and go to the EC2 instance page. You should see the elastic IP assigned to the data node there:
Connect with a SQL Client to your Exasol database on AWS
This is how that looks with DbVisualizer:
And that’s it: now you have an Exasol 1+0 cluster running on AWS. That’s not the same as a single-node system, because this 1+0 cluster can be enlarged with more data nodes. I will show how to do that in future posts.
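Once connected with the SQL client, a quick sanity check is Exasol's NPROC() function, which returns the number of active database nodes:

```sql
-- NPROC() returns the number of active data nodes in the cluster;
-- on this 1+0 cluster it should report 1
SELECT NPROC();
```

After scaling out later in this series, the same query is a simple way to confirm that the enlargement took effect.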
A word about costs: instead of using our corporate AWS account, I registered myself to see how much it would cost. It was less than 80 Euro for the 2+1 cluster environment I used for about one month, shutting down the EC2 instances whenever I didn’t need them for testing and creating courseware. With the very moderate resource consumption configured for the environment described in my postings, it should be well below 10 Euro per day.
Stay tuned for some more to come about Exasol on AWS 🙂
Adding a cluster node not only increases the available storage capacity but also the total compute power of your cluster. This scale-out is a quite common operation for Exasol customers.
My example shows how to change an existing 2+1 cluster into a 3+0 cluster. Before you can enlarge the database with an active node, that node has to be a reserve node first. See here for how to add a reserve node to a 2+0 cluster. Of course you can add another reserve node afterwards to change from 3+0 to 3+1. See here if you wonder why you might want a reserve node at all.
Initial state – reserve node is present
I start with a 2+1 cluster – 2 active nodes and 1 reserve node:
For later comparison, let’s look at the distribution of rows of one of my tables:
The rows are roughly evenly distributed across the two active nodes.
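The distribution shown above can be queried with Exasol's IPROC() function, which returns the node a row is stored on (MY_TABLE is a placeholder for your own table name):

```sql
-- IPROC() returns the number of the node storing each row,
-- so grouping by it shows the per-node row distribution
SELECT IPROC() AS node, COUNT(*) AS row_count
FROM my_table
GROUP BY IPROC()
ORDER BY node;
```

Running the same query again after the enlargement and reorganization makes the redistribution easy to verify.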
Before you continue, it would be a good idea to take a backup on a remote archive volume now – just in case.
Shutdown database before volume modification
A data volume used by a database cannot be modified while that database is up, so shut it down first:
After going to the Storage branch in EXAoperation, click on the data volume:
Then click on Edit:
Decrease volume redundancy to 1
Change the redundancy from 2 to 1, then click Apply:
Why is the redundancy reduced from 2 to 1 here? Let’s try to explain that. Initially, I had 2 active nodes with a volume using redundancy 2:
A and B are master segments while A’ and B’ are mirrored segments. If I could add a node to this volume keeping the existing segments, it would look like this:
Of course this would be a bad idea. The redundancy is reduced to 1 before the new node is added to the volume:
Only distributed master segments with no mirrors at first. Then the redundancy is again increased to 2:
This way, every master segment can be mirrored on a neighbor node. That’s why the redundancy needs to be reduced to 1.
Add new node to volume
After decreasing the volume redundancy to 1, click Edit on the volume detail page again, add n13 as a new master node to the volume, and click Apply:
Increase redundancy to 2
Now click Edit again and increase the redundancy to 2:
The state of the volume now shows as RECOVERING. Don’t worry, it just means that the mirrored segments are being created.
Enlarge the database
Now click on the database link on the EXASolution screen:
Select the Action Enlarge and click Submit:
Enter 1 and click Apply:
The database detail page looks like this now:
Technically, this is a 3+0 cluster now – but the third node doesn’t contain any data yet. If we look at the same table as before, we see that no rows are on the new node:
To change that, a REORGANIZE needs to be done, either on the database, schema, or table layer. The easiest to perform is REORGANIZE DATABASE:
This took about 10 minutes on my tiny database. The command redistributes every table across all cluster nodes and can be time-consuming with large data volumes. While a table is being reorganized, it is locked against DML. You can monitor the ongoing reorganization by selecting from EXA_DBA_PROFILE_RUNNING in another session.
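To watch the progress from a second session, a simple sketch is to select from the EXA_DBA_PROFILE_RUNNING system view mentioned above:

```sql
-- Shows profiling information for currently running statements;
-- while the REORGANIZE runs, its parts appear here
SELECT * FROM exa_dba_profile_running;
```

Re-running this select periodically shows which table the reorganization is currently working on.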
Let’s check the distribution of the previous table again:
As you can see above, there are now rows on the added node. EXAoperation also confirms that the new node is no longer empty:
On a larger database, you would see that the per-node volume usage is lower than before, with every node holding roughly the same amount of data. For failsafety, you could now add another reserve node.
Summary of steps
- Add a reserve node (if not yet existing)
- Take a backup on a remote archive volume
- Shutdown database
- Decrease volume redundancy to 1
- Add former reserve node as new master node to the volume
- Increase redundancy to 2
- Enlarge database by 1 active node
- Add another reserve node (optional)