
HA MySQL


With the reliance most web apps place on databases, making sure the database is always available goes a long way toward improving your app's reliability. I consider MySQL finicky (a manual master-to-slave fail-over at 5am is not my idea of fun), but it is what I'm stuck with. Here is my understanding of the options for keeping your app up when MySQL isn't.

Master – Slave

This is a pretty basic and common pattern. You have one server that receives all of the writes (the master) and another server that replicates those writes (the slave). The replication is asynchronous and can therefore fall behind, though if you keep the load on the slave under control it should generally keep up. You can send all of your traffic to the master, or you can send reads to the slave to spread out your read load. Writing to the slave will probably break replication (and will definitely break it if you insert into an auto-incrementing column).
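A minimal sketch of what that looks like in each server's my.cnf, assuming server ids of 1 and 2 (the ids and log names are placeholders):

    # master my.cnf
    [mysqld]
    server-id = 1
    log-bin   = mysql-bin        # binary log the slave replicates from

    # slave my.cnf
    [mysqld]
    server-id = 2
    relay-log = mysql-relay-bin
    read_only = 1                # guard against accidental writes that would break replication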

Cleaning up a broken slave can be difficult. You either need to hunt down and undo any changes made to the slave, or pull a dump from the master and import it. When importing, take care to note the binlog position from which replication should resume.
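For example, if you dump the master with mysqldump --master-data the dump records the coordinates for you; otherwise note them with SHOW MASTER STATUS and point the slave at them (the host, credentials, file name, and position below are placeholders):

    -- on the master, while writes are paused:
    SHOW MASTER STATUS;

    -- on the slave, after importing the dump:
    CHANGE MASTER TO
        MASTER_HOST='master.example.com',
        MASTER_USER='repl',
        MASTER_PASSWORD='secret',
        MASTER_LOG_FILE='mysql-bin.000042',
        MASTER_LOG_POS=107;
    START SLAVE;
    SHOW SLAVE STATUS\G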

If the master fails you can fail over to the slave, though this is a manual process. You'll need to stop anything writing to the master (if it isn't completely dead), stop the slave process on the slave, tell your application to write to the slave, get the master back up and running, get the master up to date (you probably won't know the binlog position, which means you'll likely need to do a full import), set up what was the master as the new slave, and start it replicating.
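Roughly, the database side of that failover looks something like this (a sketch, not a script; the statement on the old master only applies if it is still reachable):

    -- on the old master, if it is still up: stop application writes
    SET GLOBAL read_only = ON;

    -- on the slave being promoted:
    STOP SLAVE;
    SET GLOBAL read_only = OFF;   -- start accepting application writes

    -- later, once the old master has been rebuilt from a fresh dump,
    -- configure it as a slave of the new master with CHANGE MASTER TO
    -- and START SLAVE as shown above.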

The benefits here are that it's easy to set up, MySQL replication is pretty stable so you'll rarely have to fix it, and as long as one server gives you enough write throughput you can be reasonably happy. If you ever need more read throughput you can add additional slaves, and the replication overhead on the master is low.

Master – Master

Similar to master-slave, but now both servers are configured to be master and slave for each other. This allows you to read from and write to either server, because writes to one are replicated to the other. Of course, all of the same problems regarding repairing broken slaves, knowing binlog positions, etc. still apply. Both servers can't be allowed to hand out the same id from an auto-increment column, so you'll need to do something along the lines of configuring one server to use only even numbers and the other only odd. If the servers come under load they might start to fall behind in replication, and the order in which updates and inserts are applied might differ between the servers, which can leave them with different data.
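MySQL has settings for exactly this even/odd split; a sketch of the relevant my.cnf lines on each server:

    # server A my.cnf
    [mysqld]
    auto_increment_increment = 2
    auto_increment_offset    = 1   # generates 1, 3, 5, ...

    # server B my.cnf
    [mysqld]
    auto_increment_increment = 2
    auto_increment_offset    = 2   # generates 2, 4, 6, ...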

One way to resolve some of these problems is to only send traffic to one server.

With a VIP

If you are running on Linux you can use a virtual IP (VIP). This requires a bit of network wizardry. What you end up with is a system where the passive server (the one not in use) polls the active server (the one in use) to make sure it's alive. If it discovers that the active server is down, it steals the VIP and in doing so promotes itself to active. Your application doesn't need to know about any of this, since the IP it connects to never changes, only the server behind it. You'll still need to figure out what was wrong with the broken server and get it working again, but in theory there can be no downtime for the user.
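One common tool for this (not the only option) is keepalived, which moves the VIP via VRRP and runs a health check to decide when to fail over. A minimal sketch of keepalived.conf, where the interface, VIP, and the hypothetical check_mysql.sh script are placeholders you'd fill in yourself:

    vrrp_script chk_mysql {
        script "/usr/local/bin/check_mysql.sh"   # hypothetical script that verifies mysqld is answering
        interval 2
    }

    vrrp_instance VI_1 {
        state MASTER               # BACKUP on the passive server
        interface eth0
        virtual_router_id 51
        priority 101               # lower (e.g. 100) on the passive server
        virtual_ipaddress {
            192.168.1.100          # the VIP your application connects to
        }
        track_script {
            chk_mysql
        }
    }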

With a distributed file system

The idea here is that the file system the servers write to is shared between them (NAS or SAN or the like). This is really more an active-passive solution than master-master, as one of your servers will need to be turned off or you risk corrupting data. You can also combine this with the VIP method, though you'll need something to mount the drive (depending on the sharing method) and start MySQL. What you get with this is the knowledge that the data on the active and passive nodes is the same (it's the same mount), at the expense of a little downtime while the passive MySQL starts.
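A rough sketch of the takeover on the passive node, assuming a hypothetical shared volume at /dev/mapper/mysql_san that holds MySQL's datadir (and only after you are certain mysqld on the old active node is stopped):

    # mount the shared volume and start MySQL on the newly active node
    mount /dev/mapper/mysql_san /var/lib/mysql
    service mysql start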

NDB

This is MySQL's cluster offering. It's its own storage engine, so you can't use InnoDB or MyISAM; it has many moving parts; and it requires at least three servers. In return it gives you a system where you can read from and write to any node without any of the data integrity complications inherent in the previous patterns. The system is composed of API nodes (generally mysqld processes), data nodes, and management nodes. These processes can live on distinct machines or all on the same machine. Unlike the previous examples, all of your data does not live on all of your servers but is distributed across the cluster. This has the benefit of increasing your throughput as the number of nodes increases, though individual query performance can suffer.

Data can be mirrored between the data nodes, meaning the loss of any individual node will not result in the loss of data. Nodes can be added and dropped without fuss or harm. For example, NDB updates itself through a rolling update where each node, one at a time, is dropped out of the pool, updated, and added back into the pool.
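For reference, the cluster layout lives in the management node's config.ini, and each SQL/API node just points its my.cnf at the management node; a minimal two-data-node sketch with placeholder hostnames:

    # config.ini on the management node
    [ndbd default]
    NoOfReplicas=2                 # each fragment is stored on two data nodes

    [ndb_mgmd]
    HostName=mgmt.example.com

    [ndbd]
    HostName=data1.example.com

    [ndbd]
    HostName=data2.example.com

    [mysqld]
    HostName=sql1.example.com

    # my.cnf on the SQL/API node
    [mysqld]
    ndbcluster
    ndb-connectstring=mgmt.example.com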

You will probably need to make some application changes in order to use NDB. One set of concerns relates to security, as NDB's inter-node communication is innately insecure and requires proper use of DMZs; take a look at MySQL's documentation for more information. Performance of large joins, sorts, etc. can also be poor, as the rows involved will likely be spread across the data nodes.

Galera

Galera is a solution for MariaDB or Percona Server, which are forks of MySQL. Galera is also a clustered solution, replacing MySQL's asynchronous replication with synchronous replication. Galera combines recent research with Percona's XtraDB fork of InnoDB (MyISAM support is in beta, I believe, but isn't production ready) to provide solid performance for synchronous replication. As with NDB, Galera allows you to read from and write to any node, and to add or remove nodes with ease. When you add a node, the cluster will automatically get it synced with the rest of the cluster.
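A sketch of the wsrep settings you'd add to my.cnf on each node (the addresses and cluster name are placeholders, and the provider path varies by distribution):

    [mysqld]
    binlog_format            = ROW
    default_storage_engine   = InnoDB
    innodb_autoinc_lock_mode = 2
    wsrep_provider           = /usr/lib/libgalera_smm.so
    wsrep_cluster_name       = my_cluster
    wsrep_cluster_address    = gcomm://10.0.0.1,10.0.0.2,10.0.0.3
    wsrep_node_address       = 10.0.0.1
    wsrep_sst_method         = xtrabackup   # how a joining node gets its initial copy of the data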

Unlike NDB, all data lives on all nodes. This has benefits and drawbacks for performance. Reads are fast, and joins, sorts, etc. are fast because everything is on the node handling the request. Insert and update speed, however, will depend on the speed of the slowest node in your cluster. This is important to consider given that you will likely run a Galera cluster on commodity hardware. You can find benchmark data at this webpage.

I favor this solution, but suspect it isn't suitable for deployments that need a lot of MySQL servers to meet throughput demands. Using NDB with a caching layer to speed up frequent reads might be a better fit in that scenario.

Tungsten

Like Galera, Tungsten is a cluster solution that replaces MySQL's built-in replication. It allows for complex replication topologies and replication between different versions of MySQL, or between different databases altogether. Replication is handled by the Tungsten Replicator, which is a separate process from MySQL, so the solution is not as simple as Galera but probably makes up for it in flexibility.

RDS

Amazon's RDS (Relational Database Service) is part of its AWS offerings. You define what kind of throughput you need and they handle the rest. The only drawback I know of is that they don't yet support encryption at rest, so if you have PHI or other data you need to encrypt you are SOL. If you are in AWS and don't require encryption at rest, this is probably the right place to start.
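For example, with the AWS CLI you can ask for a Multi-AZ deployment, and Amazon maintains a synchronous standby in another availability zone and fails over to it for you (the identifier, instance size, storage, and credentials below are placeholders):

    aws rds create-db-instance \
        --db-instance-identifier myapp-db \
        --db-instance-class db.m3.medium \
        --engine mysql \
        --allocated-storage 100 \
        --master-username admin \
        --master-user-password 'change-me' \
        --multi-az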


