Docker containers, Kubernetes and cloud-native infrastructure all present significant opportunities to add efficiencies to the way we build and deploy applications. However, with the advantages of these technologies comes additional complexity. In some cases, IT teams may hesitate to take advantage of next-generation technologies because they worry that they’ll be too hard to deploy, manage and scale.
Fortunately, it doesn’t have to be that way. While container infrastructure is certainly challenging to set up and manage if you choose to design and build it yourself, there are solutions that significantly streamline the process.
Diamanti lets you deploy containerized applications in your environment faster, more easily and at lower cost, with minimal operational risk and 24/7 first-class enterprise support.
To provide a glimpse of the Diamanti Bare Metal Container Platform in action, this article walks through using Diamanti to deploy a containerized database using Kubernetes. We’ll start by discussing why this is a challenging task using traditional approaches. We will then walk through some examples of using the Diamanti Bare Metal Container Platform to perform this process in simple steps. Finally, we’ll discuss how the Diamanti approach for accomplishing this task compares to other solutions, and what makes Diamanti stand out.
Current Challenges
Let’s say that we are a business trying to modernize our existing infrastructure with the latest technologies. We decide to use Kubernetes for container orchestration, and our first task is to deploy containerized databases in production. We have no prior experience with Kubernetes.
Before taking any action, we have to go through a series of time-consuming steps and evaluations. We need to understand how the basic building blocks fit together, and how to design systems that leverage the full capabilities of the platform, before we can combine them into interconnected services. This process, as depicted below, can take weeks or even months of installation, integration and testing.
Figure 1: Traditional way of building a container environment
With the Diamanti platform, however, the process is far more streamlined and better suited to fast-moving businesses that want to reach the market with minimal operational effort. As depicted below, it is reduced to the absolute minimum of steps: engage Diamanti and see what the platform has to offer, install it, run applications through the UI dashboard or the CLI tools, and breathe easy, because you can now focus on meeting your business needs and delivering value to your customers.
Figure 2: Streamlined process as provided by Diamanti platform
Now, let’s walk through a step-by-step demonstration of how to deploy a containerized database in less than 15 minutes using the Diamanti D10 bare metal container platform and a variety of approaches.
Deploying Containerized Databases with the Diamanti Bare Metal Container Platform
Using the UI Dashboard
Diamanti offers a complete and easy-to-use UI dashboard that allows for quick experimentation with features and shows useful metrics and health reports for later analysis. We can use this dashboard to perform all critical operations in a secure way. Let’s use it to deploy our first containerized database, a Redis server, and see how easy the process is.
- Navigate to the IP address of the Diamanti cluster. You will be asked for a username and a password.
- Once you log in, you will be redirected to the main dashboard where you can see the overview of the cluster.
- We are interested in creating a new deployment, so we’ll click on the Create button. We should see the following screen where we can create our first deployment.
- First, we select the Deployment Type, since we are deploying a container to the cluster. We need to provide an App Name, so we enter redis. Next, we specify the initial instances, entering 3 to create three instances. We choose default as the namespace, and we can also define Labels; for now, let’s add one with the key role and the value cache. Then, click Next.
- On the next tab, we can specify whether the container is part of any networks. This is useful for creating custom network topologies or VPC networks that offer network segregation and improve security. For now, we don’t want to define a new network, so we’ll click Next.
- In the next section, we can define the storage requirements of the container by creating new storage claims. For now, we want the container to use only in-memory storage with no persistence, so we’ll click Next.
- Finally, we are ready to define the container details. We need to give it a name, and we also need to specify the Image, so we enter redis to match the official Redis image on Docker Hub. We then specify the list of open ports, adding 6379 since it’s the default Redis port, and we can optionally define usage limits for CPU and Memory. Now click Deploy, then click the Events icon in the bottom right corner to see the list of deployment events for Redis.
- To verify that the deployment was successful, click on the Application -> Deployments tab and review the Redis deployment details in the list.
- Verify that you can connect to the Redis server. Open your command line and connect to the Redis server using the IP obtained from the application deployment details.
➜ redis-cli -h 172.16.223.9
172.16.223.9:6379>
- Now that we can connect to one of the instances, we can also scale them down or up depending on our needs. We can use the button from the list as shown in the following clip.
That’s it. As you can see, using the UI, it’s very easy to deploy any sort of container in the cluster in a few simple steps.
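Before moving on, it’s worth running a quick smoke test against the new deployment. The session below is a minimal sketch using standard redis-cli commands, assuming the same 172.16.223.9 address obtained from the deployment details; the greeting key is purely illustrative.

➜ redis-cli -h 172.16.223.9
172.16.223.9:6379> PING
PONG
172.16.223.9:6379> SET greeting "hello from Diamanti"
OK
172.16.223.9:6379> GET greeting
"hello from Diamanti"

If the replies come back as shown, the instance is reachable and serving requests.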
Using the CLI Tools
The command line tools are suitable for everyday operations and for automating infrastructure deployments. They also offer a more complete set of operations than the UI. Let’s use them to deploy a MariaDB server in our cluster.
- First, download the tools to manage the cluster using the Download button from the UI dashboard.
- Navigate to the folder where you downloaded the tools. First, use the dctl tool to register the cluster configuration and log in to the cluster (similar steps are required if you use the kubectl tool).
➜ dctl config set-cluster autotb5 --api-version v1
--virtual-ip 172.16.220.101 --server 172.16.220.101 --insecure
Configuration written successfully
➜ dctl login --username <username> --password <password>
- Once you log in, you can explore the dctl tool options. For example, you can list the networks, which correspond to the same information shown in the dashboard.
➜ dctl network list
NAME                TYPE     START ADDRESS    TOTAL   USED   GATEWAY        VLAN   NETWORK-GROUP   ZONE
blue                public   172.16.224.11    240     3      172.16.224.1   224
default (default)   public   172.16.223.4     250     8      172.16.223.1   223
ingress             public   172.16.224.5     6       3      172.16.224.1   224
- Download the Helm chart files for MariaDB from the stable charts repository: https://github.com/helm/charts/tree/master/stable/mariadb
- Navigate to the folder where you downloaded the files and use the helm tool to install the MariaDB chart. Give it the name my-mariadb.
➜ helm install --name my-mariadb .
NAME: my-mariadb
LAST DEPLOYED: Sun Dec 30 21:00:33 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Secret
NAME                 TYPE     DATA   AGE
my-mariadb-mariadb   Opaque   2      1s
...
- Now you can use the helm list command to see the list of deployments in the cluster.
➜ helm list
NAME         REVISION   UPDATED                    STATUS     CHART                   NAMESPACE
haproxy-lb   1          Thu Dec 20 17:13:07 2018   DEPLOYED   haproxy-ingress-2.0.2   ingress-controller
mssql        1          Fri Dec 21 00:10:23 2018   DEPLOYED   mssql-linux-0.6.5       mssql
mssql2       1          Fri Dec 21 00:11:00 2018   DEPLOYED   mssql-linux-0.6.5       mssql
mssql3       1          Fri Dec 21 00:12:37 2018   DEPLOYED   mssql-linux-0.6.5       mssql2
my-mariadb   1          Sun Dec 30 21:00:33 2018   DEPLOYED   mariadb-5.2.5           default
- You can also verify with the dctl event list command that the deployment was successful.
➜ dctl event list | grep my-mariadb
2m   2m   1   INFO   default   my-mariadb-mariadb   f307e54f-0c75-11e9-a37d-a4bf01147eba   StatefulSet   statefulset-controller   SuccessfulCreate   create Pod my-mariadb-mariadb-0 in StatefulSet my-mariadb-mariadb successful
2m   2m   1   INFO   default   my-mariadb-mariadb   f307e54f-0c75-11e9-a37d-a4bf01147eba   StatefulSet   statefulset-controller   SuccessfulCreate   create Claim data-my-mariadb-mariadb-0 Pod my-mariadb-mariadb-0 in StatefulSet my-mariadb-mariadb success
...
- Use the provided IP address to test that the server is up and running.
➜ mysql --host=172.16.223.9 --port=3306 --user=<user> --password=<pass>
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.12
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
That’s it. You have successfully deployed a containerized database using Diamanti.
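As a final sanity check, a few statements from the same mysql prompt confirm that the server accepts reads and writes. This is a minimal sketch; the demo database and greetings table are purely illustrative names, and the responses shown are what a typical session looks like.

mysql> CREATE DATABASE demo;
Query OK, 1 row affected (0.01 sec)

mysql> USE demo;
Database changed

mysql> CREATE TABLE greetings (id INT PRIMARY KEY, message VARCHAR(64));
Query OK, 0 rows affected (0.04 sec)

mysql> INSERT INTO greetings VALUES (1, 'hello from Diamanti');
Query OK, 1 row affected (0.01 sec)

mysql> SELECT message FROM greetings;
+---------------------+
| message             |
+---------------------+
| hello from Diamanti |
+---------------------+
1 row in set (0.00 sec)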
Why Does It Matter?
In the above tutorial, we saw how easy it is to deploy a containerized database using the Diamanti platform. But you may be wondering why it’s worth investing in this technology.
Generally, when comparing two technologies, it helps to focus on their distinct differences so that anyone evaluating them can form a clear picture. In terms of deploying existing applications, if you’re deciding between a certified Kubernetes distribution on cloud VMs and a bare metal container platform like Diamanti’s, the following key factors suggest that the latter is better suited to running production containers with high performance requirements at scale.
- Tooling: Both platforms offer a UI for managing container infrastructure. However, the default Kubernetes UI is still missing a few critical components, such as RBAC controls (there is an open issue) and SNMP monitoring. Enterprise customers will be looking to use these, and the Diamanti platform offers them by default.
- Performance: By design, bare metal platforms offer better resource consolidation and a better foundation for running a full Kubernetes cluster. A smaller footprint means a bigger performance boost compared to VMs. The developer experience improves as well: we can deploy apps in less than 15 minutes, tightening the feedback loop.
- Networking: Adding a networking layer to Kubernetes is a necessary but tricky step. There are open source solutions that handle the dirty details, but they require a lot of configuration, and this sort of software-defined networking has its own drawbacks, such as increased overhead and configuration roadblocks. With Diamanti, we don’t have those issues: containers are plugged directly into existing VLANs and DNS, and each one is assigned its own IP address. Diamanti achieves this through a combination of standard and specialized hardware, removing unnecessary hops between the container and the OS. The result is native performance and a simplified network topology.
- Storage: According to the article "State of Kubernetes Ecosystem," storage is the third biggest challenge that practitioners and enterprises face. The key is to offer a clean API for using persistent data, so that we don’t have to build one from scratch every time we deploy a containerized application or database, and scaling this layer for future needs should be equally easy. This is not obvious or trivial in Kubernetes today, as most external storage solutions do not work well in cloud-scale environments. It is not an issue on the Diamanti platform, where the persistence layer is built on modern NVMe block storage with a simple API to access it, ticking all the boxes of low latency, flexibility and ease of use that modern enterprise applications require.
- Quality of Service (QoS): On top of the segregated duties of networking and storage, Diamanti offers fine-grained control over the quality of service that each of those components delivers. This matters because balancing the workload of critical services and applications cuts the risk of resource starvation. For example, it makes it possible to control the bandwidth allocated to each pod, container or volume and avoid bottlenecks, offering guaranteed QoS at multi-million IOPS, far beyond competing solutions. A minimal sketch of how storage claims and per-container resource limits are expressed in plain Kubernetes follows this list.
- Support: Diamanti offers 24/7 top-to-bottom support for its platform. In the broader Kubernetes ecosystem, a range of partners offer similar support with varying degrees of quality.
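To make the storage and QoS discussion more concrete, here is a minimal sketch of how a persistent volume claim and per-container resource limits are expressed through the standard Kubernetes API, which tools like kubectl can drive against the cluster as shown earlier. The storage class name high, the volume size and the CPU/memory figures are illustrative assumptions, not Diamanti defaults; substitute whatever classes and limits your cluster actually provides.

➜ kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data                 # illustrative claim name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: high           # hypothetical class; use what your cluster exposes
  resources:
    requests:
      storage: 20Gi                # illustrative size
---
apiVersion: v1
kind: Pod
metadata:
  name: redis-persistent           # illustrative pod name
spec:
  containers:
  - name: redis
    image: redis
    ports:
    - containerPort: 6379
    resources:                     # per-container CPU and memory requests/limits
      requests:
        cpu: "500m"
        memory: 256Mi
      limits:
        cpu: "1"
        memory: 512Mi
    volumeMounts:
    - name: data
      mountPath: /data             # default Redis data directory
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: redis-data
EOF

The same requests and claims drive the scheduler’s placement and QoS decisions, which is why expressing them explicitly, rather than relying on defaults, pays off for databases.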
Conclusion
In this article, we described the current challenges of deploying containerized databases, and we walked through some easy-to-follow examples of deploying them with the Diamanti container platform.
In closing, let me emphasize that I wrote this article as someone with limited working knowledge of Kubernetes and its ecosystem. I’ve used Kubernetes a bit, but I’m no Kubernetes expert; most of my work is with applications, not infrastructure. Nonetheless, the Diamanti bare metal container platform allowed me to carry out my containerized database deployment easily, in a mere 15 minutes. In turn, that made it possible for me to focus on my application and stop worrying about the underlying infrastructure.
If you want to see how Diamanti can help your business scale in the cloud faster, more easily and at lower cost, then Schedule a demo now.