For one, latency will kill the speed of your application. Most applications run the majority of their queries sequentially, and need to wait for the response of one query before running the next. There are exceptions to this, such as multi-threaded applications with multiple connections to your database, or batch queries, but in my experience the bulk of database logic is sequential.

For simplicity (and it's usually not even this simple), let's say your database is 200ms round-trip away from your application. This means that a single query takes 100ms to hit the server and be processed, and then another 100ms to come back with a result. Let's say an average call for your application takes 10 queries. This is on the low side, but already you are looking at a total of 2 seconds to run a small set of queries, and we're skipping over the SSL handshake, initial connection and schema information exchange. This absolutely destroys your performance: 2 seconds of response time is already "poor" by modern standards, and all of that is JUST database queries, nothing else.

You can mitigate the impact of this by having your backend build a cache from the SQL, and serving that up from a closer server. Better? Slave replication to a closer server, so at least all reads happen locally. Best? An application infrastructure that serves up locally relevant data from locally relevant servers, preferably with geo redundancy and such. This is the basis for enterprise applications that need to span the globe; but for the purpose of this example, it illustrates why a far-away database is a bad idea.

There are other concerns, such as stability. More hops in the connection means more points where a route can fail and your database connection drops unexpectedly. As redundant as your database setup is, a dropped connection mid-way is not trivial to design around and recover from beyond "oops, a problem has occurred, please try again".

There are also security considerations: your database should be shielded from outside actors, sharing a private network with your application. Over long distances this can make the latency issues even worse, as both sides now need to traverse their private gateway. Alternatively, not doing so means network hops over open channels and more opportunities for bad actors to attempt some type of exploit, listen in on your database traffic, etc. For the longest time the default behaviour for a lot of MySQL servers was not to use SSL; listening in on them was trivial if you controlled the network in between application and server. I believe even today apt-get install mysql under Ubuntu creates an installation that allows non-SSL traffic (though these days it only listens on localhost by default).

These are just the big items that I'd expect a senior, and even a medior, to understand, but there are more specific things, like how one handles transactions, and designing an application that can pick up where a broken connection left off in scenarios where a transaction isn't feasible, or even counter-productive. For example: you use master-slave replication where the master is only used for writes. You read a message queue table and send a notification via an SMS gateway. You can roll back the transaction, but you can't un-send that SMS. Even worse: if you do roll it back, you may end up sending two SMS messages.
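On the MySQL defaults mentioned above: modern servers can be told to refuse plaintext connections outright. A minimal my.cnf fragment, assuming MySQL 5.7.8 or later and that server certificates have already been provisioned (the paths shown are illustrative, not defaults):

```ini
[mysqld]
# Reject any client that does not connect over TLS (available since MySQL 5.7.8).
require_secure_transport = ON
# Illustrative certificate paths; point these at your real PKI material.
ssl_ca   = /etc/mysql/certs/ca.pem
ssl_cert = /etc/mysql/certs/server-cert.pem
ssl_key  = /etc/mysql/certs/server-key.pem
# Keep the server off public interfaces as well.
bind_address = 127.0.0.1
```

Encrypting the channel doesn't remove the need for a private network, but it closes the "listen in on the wire" attack the paragraph above describes.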
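To make the latency arithmetic above concrete, here's a tiny sketch. The 200ms round trip and 10 queries per call are the illustrative numbers from the example, not measurements, and real calls also pay for connection setup and handshakes:

```python
# Illustrative latency arithmetic for sequential queries over a distant link.
# Numbers (200 ms round trip, 10 queries per call) come from the example above.

ROUND_TRIP_S = 0.200   # 100 ms to the server (incl. processing) + 100 ms back
QUERIES_PER_CALL = 10

def sequential_call_time(queries: int, round_trip: float) -> float:
    """Each query must wait for the previous result, so latencies simply add up."""
    return queries * round_trip

total = sequential_call_time(QUERIES_PER_CALL, ROUND_TRIP_S)
print(f"{total:.1f} s spent purely on round trips")  # prints "2.0 s spent purely on round trips"
```

The point of the sketch is that nothing here is computation: the whole 2 seconds is waiting on the wire, which is why moving the data closer (cache, replica, local infrastructure) is the only real fix.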
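And on the SMS example: one way to let a worker pick up where a broken connection left off is to record intent *before* performing the irreversible side effect, instead of sending inside the transaction. A minimal sketch, assuming a hypothetical message_queue table with a status column; sqlite3 stands in for the real database and send_sms for the real gateway call:

```python
import sqlite3

sent_log = []  # stands in for the SMS gateway's delivery record

def send_sms(recipient: str, body: str) -> None:
    """Hypothetical SMS gateway call: cannot be rolled back once made."""
    sent_log.append((recipient, body))

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE message_queue ("
    "  id INTEGER PRIMARY KEY,"
    "  recipient TEXT, body TEXT,"
    "  status TEXT DEFAULT 'pending')"  # pending -> sending -> sent
)
conn.execute(
    "INSERT INTO message_queue (recipient, body) VALUES (?, ?)",
    ("+15550001", "your order has shipped"),
)
conn.commit()

def process_queue(conn: sqlite3.Connection) -> None:
    rows = conn.execute(
        "SELECT id, recipient, body FROM message_queue WHERE status = 'pending'"
    ).fetchall()
    for msg_id, recipient, body in rows:
        # 1. Claim the row and COMMIT before touching the gateway. If the
        #    worker dies after this point, the row is visibly 'sending', so
        #    a recovery pass knows the SMS *may* have gone out and can
        #    reconcile against the gateway instead of blindly resending.
        claimed = conn.execute(
            "UPDATE message_queue SET status = 'sending' "
            "WHERE id = ? AND status = 'pending'",
            (msg_id,),
        ).rowcount
        conn.commit()
        if not claimed:
            continue  # another worker got there first

        # 2. The irreversible side effect happens outside any transaction.
        send_sms(recipient, body)

        # 3. Record completion so no retry picks this row up again.
        conn.execute(
            "UPDATE message_queue SET status = 'sent' WHERE id = ?", (msg_id,)
        )
        conn.commit()

process_queue(conn)  # sends the SMS once
process_queue(conn)  # nothing 'pending' remains, so nothing is re-sent
```

This doesn't make the send transactional (nothing can), but it replaces "rollback silently resends" with an explicit in-between state that a human or a reconciliation job can resolve.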