Have you been thinking about running your Oracle environment in the cloud? Is your organization in the midst of a technology refresh, or in the process of upgrading its Oracle databases and applications?
If the answer to any of these questions is yes, you may be happy to learn about NetApp Cloud Volumes for AWS, a recently released solution. With Cloud Volumes, scaling your Oracle databases in the cloud has never been easier.
To prove the point, let’s take a look at how Cloud Volumes works. I’ll start by outlining why you’d want to run an Oracle database in the cloud, then transition into a demonstration of NetApp Cloud Volumes’ main features.
Why Oracle on the AWS Cloud?
When embarking on a hardware refresh or an on-premises migration, you are usually constrained by a long procurement process and a limited choice of hardware vendors and system configurations. In some cases, it is not possible to get the loaner hardware needed to evaluate performance and migration paths ahead of time, which narrows your options. I have seen companies select and purchase equipment at the start of a project (due to budget cycles), only to find the hardware obsolete by the time of the final rollout.
An elastic cloud environment allows your database to scale in line with your requirements and timeline, so you pay only for the resources you use, when you need them. Moving your databases to the cloud eliminates on-premises hardware support costs and reduces data center costs. You can reallocate those savings into staff training and into the redundancy, durability, and high-availability components that keep your business running.
When looking at migrating an Oracle database or an Oracle-based application to the cloud, there are important aspects that need to be considered from the beginning. Long-term business success will rely on key architectural decisions made early on, as well as in-house technical support teams and their ability to adapt. Performance, security, availability, reliability, and capacity are all aspects that can make or break the end-user experience. Fortunately, all of them can be evaluated and managed along the way in a scalable environment.
Know your requirements and SLAs
Any new database deployment or migration should have clear requirements and stated SLAs up front. Considerations such as response times, data throughput, peak cycles, planned vs. unplanned outages, and geographical access should be documented and known by the team. For existing databases and applications, capture a set of performance and capacity baselines ahead of time and use them to compare against the migrated environment. Many queries or operations that run well on-premises may not perform the same way in the new configuration. These metrics will help you choose the right components for the new system and help you tune your database to meet (or in many cases exceed) your current performance.
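If you have access to AWR data (Oracle Diagnostics Pack), a quick way to capture such a baseline is to pull a few headline metrics from the workload repository before the migration and re-run the same query afterward. The following is a minimal sketch; the metric list, output file name, and connection method are illustrative, not prescriptive.

    # Capture a simple I/O and response-time baseline from AWR history.
    # Assumes sqlplus is on the PATH, ORACLE_SID is set, and the OS user
    # can connect as SYSDBA; requires the Diagnostics Pack license.
    sqlplus -s / as sysdba <<'EOF' > baseline_$(date +%Y%m%d).txt
    SET PAGESIZE 200 LINESIZE 200
    SELECT metric_name, metric_unit, ROUND(AVG(average), 2) AS avg_value
    FROM   dba_hist_sysmetric_summary
    WHERE  metric_name IN ('Physical Read Total Bytes Per Sec',
                           'Physical Write Total Bytes Per Sec',
                           'Redo Generated Per Sec',
                           'SQL Service Response Time')
    GROUP  BY metric_name, metric_unit
    ORDER  BY metric_name;
    EOF

Re-running the same query against the migrated database gives you an apples-to-apples comparison against your documented SLAs.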
Storage Options
One of the most important elements of an Oracle database is knowing how and where to store and retrieve your data securely, reliably, and quickly. An Oracle database consists of binary files (executables, control files, etc.), database files (data), and redo log files (used for recovery). Using the proper storage for each type of file is crucial to keeping your database fast and available at all times, regardless of its total size.
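If you want to see exactly which files make up your database (and therefore what needs to live on which class of storage), a quick query against the data dictionary is enough. This is a minimal sketch run from SQL*Plus as a privileged user; nothing in it is specific to AWS or NetApp.

    # List the control files, data files, and redo log members of the instance.
    sqlplus -s / as sysdba <<'EOF'
    SET PAGESIZE 100 LINESIZE 200
    COLUMN name FORMAT A80
    SELECT 'CONTROL' AS file_type, name FROM v$controlfile
    UNION ALL
    SELECT 'DATA', name FROM v$datafile
    UNION ALL
    SELECT 'REDO', member FROM v$logfile
    ORDER BY 1, 2;
    EOF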
Application developers and database administrators strive to tune their queries so that most of the data is retrieved from the buffer cache. When the data is not available in memory, data blocks must be retrieved from the storage subsystem. Accessing these blocks creates latencies that can impact the overall performance of your system. Deploying the correct storage configuration will minimize these latencies and offer optimal performance. AWS provides the following native storage options for your database:
- EBS with General Purpose volumes (gp2) – These volumes provide a good balance between price and performance. You can use them for your boot volumes, application files and binaries. General Purpose volumes offer 3 IOPS/GB up to 10,000 IOPS. Use more than a single volume to unlock more bandwidth by spreading the load across all volumes.
- EBS with Provisioned IOPS volumes (io1) – AWS offers block volumes with guaranteed IOPS performance (99.9%) based on the size of the volume (for example, 50 IOPS/GB of storage allocated, up to a maximum of 32,000 IOPS per volume with single-digit millisecond latency). These volumes should be used along with ASM (Oracle Automatic Storage Management) for storing database data and log files. Configure ASM to stripe data across multiple EBS volumes to achieve the highest bandwidth and to scale the storage (a minimal sketch follows this list).
- HDD-backed volumes – These volumes are best used with large data sets (flat files) that require intense and sustained throughput. You can leverage them for your ETL jobs but they are not suitable for your database files and logs.
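To make the ASM striping point concrete, here is a rough sketch of provisioning a pair of io1 volumes with the AWS CLI and presenting them to ASM as a single disk group. The sizes, IOPS figures, availability zone, device names, volume and instance IDs, and disk group name are all illustrative placeholders, and the disks must be made discoverable by ASM (for example via udev rules or ASMLib) before the CREATE DISKGROUP step.

    # Provision two 200GB io1 volumes (10,000 IOPS each) in the same AZ as the instance.
    aws ec2 create-volume --volume-type io1 --size 200 --iops 10000 \
        --availability-zone us-east-1a
    aws ec2 create-volume --volume-type io1 --size 200 --iops 10000 \
        --availability-zone us-east-1a

    # Attach them to the database instance (IDs below are placeholders).
    aws ec2 attach-volume --volume-id vol-0aaa111 --instance-id i-0bbb222 --device /dev/sdf
    aws ec2 attach-volume --volume-id vol-0ccc333 --instance-id i-0bbb222 --device /dev/sdg

    # Once the disks are visible to ASM, create a disk group that stripes across both.
    sqlplus -s / as sysasm <<'EOF'
    CREATE DISKGROUP data EXTERNAL REDUNDANCY
      DISK '/dev/oracleasm/disks/DATA1', '/dev/oracleasm/disks/DATA2';
    EOF

ASM stripes extents across all disks in the group by default, so adding volumes later grows both capacity and bandwidth.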
AWS updates its pricing regularly. Refer to the AWS website for current pricing.
NetApp Cloud Volumes
Until recently, EBS was the only AWS block storage option for your Oracle database files. Careful planning was necessary to achieve the optimal balance between read/write latency, overall data throughput, and cost. NetApp aims to abstract away that complexity by providing a single high-performance storage service called Cloud Volumes.
As a database architect designing and provisioning Oracle databases, I find that NetApp Cloud Volumes give me an efficient way to set up and scale such databases. Cloud Volumes are much more than an NFS or SMB file system. The service offers high durability (99.999999%), encryption at rest, and highly available, high-performance access to your data, which makes it an ideal candidate for Oracle databases. Your volumes are available across all availability zones within a single AWS region at all times. Additionally, Cloud Volumes offer unique features such as point-in-time snapshot technology, allowing you to take secure backups in seconds with no impact on your running database. Managing the volumes is easy through an online management console, as well as dedicated APIs for your DevOps and admin teams.
During the development, testing, and support of Oracle-based applications, different teams frequently need a clone of the production database. NetApp leverages its cloning technology to let you create an unlimited number of database clones instantly, as needed. This simplifies the process for your database administrators, enabling them to respond quickly to the growing demands of your organization.
The secret to the high throughput achieved by NetApp Cloud Volumes lies in Oracle's Direct NFS (dNFS) feature, which allows multiple network sessions to access the volumes concurrently. This provides a significant advantage over AWS's native storage. Internal tests run and published by NetApp's technical team report close to 300,000 IOPS on 100% read workloads and 235,000 IOPS on a 75/25 mix of reads and writes. While your mileage may vary, the team was able to achieve a 16Gb/sec rate against a single volume. Note that one of the limiting factors on write performance is AWS's maximum VPC egress limit: while most reads are not constrained, you will experience write latencies should you reach the AWS stated ceiling of 5Gbps.
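Once your database is running on the volumes, you can confirm that dNFS is actually in use and see the open network channels by querying the standard dNFS views. A minimal check, assuming a SYSDBA connection:

    # Verify the Direct NFS client is serving the database files and count
    # the open channels (network sessions) per NFS server.
    sqlplus -s / as sysdba <<'EOF'
    SET PAGESIZE 100 LINESIZE 200
    COLUMN svrname FORMAT A40
    COLUMN dirname FORMAT A60
    SELECT svrname, dirname FROM v$dnfs_servers;
    SELECT svrname, COUNT(*) AS open_channels FROM v$dnfs_channels GROUP BY svrname;
    EOF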
NetApp provides a three-tier pricing model for Cloud Volumes that balances performance and capacity. Use different tiers based on your defined SLAs. The tiers are:
- Standard – Up to 16 MB/s of storage bandwidth per TB allocated ($0.10/GB per month)
- Premium – Up to 64 MB/s of storage bandwidth per TB allocated ($0.20/GB per month)
- Extreme – Up to 128 MB/s of storage bandwidth per TB allocated ($0.30/GB per month)
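As a quick back-of-the-envelope illustration (my arithmetic, not NetApp's published sizing guidance) of how the tiers translate into throughput and cost:

    Premium tier, 4TB allocated:
      bandwidth ceiling = 4 TB x 64 MB/s per TB   = 256 MB/s
      monthly cost      = 4,096 GB x $0.20 per GB = about $819 per month

If bandwidth rather than capacity is your bottleneck, allocating a larger quota or moving up a tier raises the ceiling.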
Each volume can be created as small as 1GB and extended at any time up to 100TB, quickly and easily. Your database is not locked into the size you choose up front; the quota can be raised, or additional volumes added, as the database grows. Note that AWS adds standard charges for all egress traffic to the Cloud Volumes (database writes). Current prices vary between $0.01/GB and $0.02/GB depending on your region. For example, an average database writing 2MB/s (256 Oracle 8KB blocks per second) generates roughly 7GB of egress per hour, which works out to roughly $0.07 to $0.14 per hour, or about $1.70 to $3.50 per day. Make sure to account for this overhead when estimating the initial load and ongoing database operations. Learn more with this great three-part primer by Chad Morgenstern (part 1, part 2, part 3).
If you are already using NetApp on-premises, consider using Oracle Data Guard or Oracle GoldenGate Replication for your upcoming migration. Using real-time replication will eliminate some challenges with migrating your data to the cloud. Additionally, consider replication using Cloud Volumes to consolidate various sources of data and provide a single point of reference for your data.
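Whichever replication route you choose, the source database has to meet the usual prerequisites for redo-based replication. The following is a minimal, generic readiness check, not a full Data Guard or GoldenGate setup:

    # Check that the source database is in ARCHIVELOG mode and forces logging,
    # both of which redo-based replication depends on.
    sqlplus -s / as sysdba <<'EOF'
    SELECT log_mode, force_logging FROM v$database;
    -- If either needs changing (enabling ARCHIVELOG requires a brief outage):
    --   SHUTDOWN IMMEDIATE;
    --   STARTUP MOUNT;
    --   ALTER DATABASE ARCHIVELOG;
    --   ALTER DATABASE OPEN;
    --   ALTER DATABASE FORCE LOGGING;
    EOF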
Get Started Today
Sign up for a new account on the NetApp website. (Note that access to NetApp Cloud Volumes is not currently automated, but will be made self-service in the near future.) Once you receive access to your account, log into the console to create new Cloud Volumes. Here are the main steps to get your database up and running quickly.
- Create a new Cloud Volume. It is recommended to create two volumes, one for your data files and one for your redo log files. This will allow you to take snapshots of your database and apply the separate redo logs to bring the database to the latest state. Choose a unique name for your volumes, define a volume path, specify the service level, adjust the size quota, and click Create Volume. Your Cloud Volume will be provisioned within a few seconds.
- Review your volumes. You can review and manage all of your Cloud Volumes directly in the console. A set of APIs is also available to automate certain tasks.
- Enable the Oracle Direct NFS Client for your Oracle database. By default, the Direct NFS Client is disabled in a single-instance Oracle database installation. You need to enable this feature to use NetApp Cloud Volumes. (Shut down any database instances running from this Oracle home before relinking.)
- Change your current directory to $ORACLE_HOME/rdbms/lib
- Enter the following command: make -f ins_rdbms.mk dnfs_on
- Mount the Volumes. Once your Volumes are created, the NetApp Cloud Manager will provide you with the necessary mount instructions (see the sketch after this list for an example).
- Create your database. Now that you have the storage allocated and available on your system, create your database as you would normally. Here are two screenshots showing the database files located on the Cloud Volumes created during my tests.
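To give a feel for the mount step and an optional dNFS configuration, here is a rough sketch. The server IP, export names, mount points, and mount options are placeholders; use the exact values that the NetApp Cloud Manager displays for your volumes. An oranfstab file is optional, since dNFS can fall back to the operating system mount table.

    # /etc/fstab entries for the data and redo volumes (values are placeholders):
    #   172.16.0.4:/oradata  /u02/oradata  nfs  rw,bg,hard,rsize=65536,wsize=65536,vers=3,tcp,timeo=600  0 0
    #   172.16.0.4:/oralogs  /u03/oralogs  nfs  rw,bg,hard,rsize=65536,wsize=65536,vers=3,tcp,timeo=600  0 0
    sudo mkdir -p /u02/oradata /u03/oralogs
    sudo mount /u02/oradata
    sudo mount /u03/oralogs

    # Optional $ORACLE_HOME/dbs/oranfstab, telling dNFS about the server and exports:
    #   server: cloudvolumes
    #   path: 172.16.0.4
    #   export: /oradata mount: /u02/oradata
    #   export: /oralogs mount: /u03/oralogs

From there, point your database creation (DBCA or a CREATE DATABASE script) at the mounted paths, and confirm that dNFS is active with the v$dnfs_servers query shown earlier.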
Wrapping Up
As demand for data and analytics grows exponentially, so do your database and storage needs. NetApp has made provisioning high-performance storage quick and seamless with Cloud Volumes. I have not yet deployed Cloud Volumes in a production setting, but I look forward to using this technology to help streamline my Oracle database workflow.