Product registrations have been around forever. Typically they provide extra benefits to the consumer such as an extended warranty or premium support. If you don’t register, you don’t get these benefits.
Last week I purchased a safe for my house. Having been burglarized before, I wanted a big, freakin’ heavy safe to hold my valuables. I ended up purchasing a SentrySafe model, which weighs a couple hundred pounds. After all, what’s the point of having a safe if a burglar can just walk out with it in hand?
After moving the safe into the house I saw the product registration card. Seems innocent enough right?
The card states that you should register for a number of benefits, including “for insurance purposes in case it’s lost or stolen”. Instead of mailing this card in, I decided to visit their online registration website.
After reading the details, I learned that I am not covered by their $10,000–$50,000 insurance policy unless I register my safe AND provide an itemized list of everything inside it.
A list of all my valuables entered into an online database? Seriously? NO THANKS
Consider this very realistic scenario based on this process:
- Tens (hundreds?) of thousands of people register their SentrySafe safes, providing the model, an itemized list of valuables and their address (where the safe is likely located).
- All of this data is stored in a database, with unknown levels of security in place.
- Hackers breach the network and servers owned by SentrySafe. This is pretty common; just ask the companies behind the 15 worst security breaches in history.
- Using the data, hackers/burglars can create a map of everyone who owns a safe, its model and its list of valuables.
- Burglars can target the models that are known to be easily opened, or target the largest models knowing they likely contain more valuables.
Pretty scary to think about. Think twice before registering your products.
This is where the OpenArk tools come to the rescue. This set of free Python tools makes many time-consuming DBA tasks as easy as can be. I’m only going to cover the oak-online-alter-table tool, but there is plenty of in-depth instruction for the other tools on the OpenArk website.
Here is the simple, line-by-line installation:
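The original install listing didn’t survive, so here is a sketch of a typical install. openark-kit is distributed as a Python tarball; the filename and version below are illustrative, so grab the latest tarball from the OpenArk site:

```shell
# Unpack the openark-kit tarball (version number is illustrative)
# and install it with the standard Python setup.py flow.
tar xzf openark-kit-196.tar.gz
cd openark-kit-196
sudo python setup.py install
```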
Altering the old way
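For comparison, a sketch of the traditional approach (database, table and column names here are made up for illustration): a plain ALTER TABLE, which on a large table blocks writes for the entire rebuild.

```shell
# The old way: a direct ALTER TABLE. On a big table this rebuilds the
# whole table and blocks writes until it finishes -- possibly hours.
mysql -u root -p mydb -e "ALTER TABLE users ADD COLUMN last_login DATETIME;"
```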
Altering with OpenArk
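A minimal invocation looks roughly like this (table and column names are invented, and connection options are omitted; verify the exact flags against `oak-online-alter-table --help` before running):

```shell
# The same schema change, performed online in chunks so the table
# stays available for reads and writes while the copy proceeds.
oak-online-alter-table --user=root \
  --database=mydb --table=users \
  --alter="ADD COLUMN last_login DATETIME" \
  --chunk-size=1000
```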
Live Status of the Process
This is one of the biggest benefits of using OpenArk – you get a live status report telling you exactly what percentage of the move is complete. Anyone who has done manual altering knows how difficult it is to have no sense of the progress.
How it works
The tool uses a concept called “ghosting”. A second table with the new schema is created, and the data is copied over to it. Any modifications to the data that occurred during the copy are then executed on the new table to bring it up to date. Finally, the new table is renamed to the original table’s name and the old table is dropped. With all of the steps required, it’s easy to see the value in a tool that takes care of everything.
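The steps above can be sketched with a toy in-memory simulation (plain Python dicts standing in for tables; all names are invented for illustration — the real tool does this in SQL, with triggers capturing the mid-copy changes):

```python
# Simulate the "ghosting" procedure on dict-based stand-ins for tables.
def online_alter(tables, name, add_column, default=None):
    ghost = name + "_ghost"
    # 1. Create a ghost table with the new schema and copy existing rows.
    tables[ghost] = {pk: {**row, add_column: default}
                     for pk, row in tables[name].items()}

    # 2. Writes that arrive during the copy are captured (the real tool
    #    uses triggers) and replayed onto the ghost table.
    def replay(changes):
        for pk, row in changes.items():
            tables[ghost][pk] = {**row, add_column: default}

    # 3. Swap: rename the ghost to the original name, drop the old table.
    def swap():
        tables.pop(name)
        tables[name] = tables.pop(ghost)

    return replay, swap

tables = {"users": {1: {"name": "alice"}, 2: {"name": "bob"}}}
replay, swap = online_alter(tables, "users", "last_login")
replay({2: {"name": "bobby"}})   # a write that happened mid-copy
swap()
print(tables["users"])
```

After the swap, the table has the new column and the mid-copy write survived, which is exactly the guarantee the ghosting procedure provides.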
I was recently contacted by Amazon Web Services to provide feedback on their relational database-as-a-service (RDS). I took the opportunity to outline my usage and suggestions for improvements. As there is so much “black magic” behind the scenes at AWS, I was really happy to receive an in-depth response from an executive there (shown below my response).
Overview of My RDS Usage
I manage the infrastructure for a consumer website offering games, trivia, surveys and other content. We have hundreds of thousands of users and rely on RDS for all our relational DB needs.
- Full stack running on EC2
- One master DB and one read-replica
- 100-200GB of data
- 350GB disk allotted (due to rumors on StackOverflow that 300GB+ drives have striping)
- Not utilizing Multi-AZ, due to the large cost increase and because I’m not sure I can trust that it will work. The last major outage, from that storm hitting VA, showed that even Multi-AZ doesn’t help if none of the infrastructure is available.
- Memcache (via Elasticache) utilized
- MongoDB used for unstructured data requiring heavy writes
Suggestions for Service Improvements and New Features
- Direct URL to Cloudwatch graphs with public access. This would allow me to distribute bookmarks to my developers and other staff without giving them access to AWS. There is no private data shown on the graphs, so it should be quite safe to do.
- Access to bin logs for forensic dives into issues (e.g., how was all data dropped from every table?). We recently had an outage where our entire database was wiped. We had no log of queries, and therefore couldn’t determine whether it was a bug in our code or a failure in MySQL that corrupted all the data. I had to do a “restore to point in time”, which took 2.5 hours to complete. During this time I had no idea how long it would take to come back online, or whether the database would just empty again.
- Feedback in the GUI or API as to how far along (%) a DB Restore is. Currently there is no information and I can’t ever give anyone an ETA on when service might return.
- When creating a database, could you explicitly tell me at what disk size I will get higher performance from striping? I read on StackOverflow that it is enabled on disks larger than 300GB.
- I have set up a DB using the new PIOPS environment and began testing the load time. It seems that even with the largest instances available and best practices, migrating will still mean at least 5 hours of downtime for us. For this reason we’ll likely have to wait to take advantage of it. Do you have any idea when the ability to boot standard snapshots into PIOPS will become available?
Response from AWS
October 3rd 2012
I’ve passed this doc to our development team and documented these requests into a requirements doc that we maintain for them. Some of the items are already on their radar, so your input will influence their priority. One point you raise (in both your configuration detail and in one question towards the end of the doc) is in regards to scaling storage:
You will realize improvements with RDS throughput by scaling storage as high as 500GB, and this effect starts at a level well under 100GB (ie: striping occurs at a far lower level than 300GB).
The most important factor in realizing this throughput potential is the instance class. Specifically, the following instance classes are considered High I/O instances:
These instances have large network bandwidth available to them, so the upgrade that you mentioned on stackoverflow (to the m2.2xlarge instance) was likely the main reason you saw a leap in throughput. If you stripe your current storage as high as 500GB, this will continue to increase. With provisioned IOPS support for RDS (PIOPS-announced last night), throughput will now scale linearly all the way to 1TB.
With PIOPS, the throughput rate you can expect is currently associated with the amount of allocated storage. For Oracle and MySQL databases, you will realize a very consistent 1,000 IOPS for each 100GB you allocate – resulting in a potential throughput max of 10K IOPS. The (current, temporary) downside is that you will need to unload/load data to migrate an existing app to the PIOPS RDS.
Loading snapshots into PIOPS instances is still a few months away, but the team is committed to delivering this as quickly as possible. We understand the downtime impact and recommend that PIOPS instances be used for testing, benchmarking and new workloads. Existing workloads that need PIOPS are mostly sensitive to downtime, so we don’t anticipate a lot of migration until we can provide a more seamless transition.
Regarding Multi-AZ deployment… we’re constantly improving the back-plane of RDS to ensure that MAZ is failure-proof. Until we’re at 100% protection, however, the work continues – to the point that it often pushes back more visible roadmap features.
My Thoughts on the AWS response
The AWS team is very sharp. The speed at which they iterate on their products in response to customer demand is incredible. Their response to my concerns with the RDS product clearly demonstrates this.
RDS has been a huge success for me. Though there have been a couple of periods of downtime due to EC2 apocalypse-like events, the ability to focus on product development instead of mundane DB/sysadmin tasks is priceless. Even more important is the peace of mind I can have as a sysadmin. Typically, database backup, storage, rotation, testing and recovery is an arduous process requiring constant attention. Giving up a couple of control knobs for all this automation is absolutely the right decision for any startup.
I’m excited to see what the AWS team comes out with next.
Gotta love his attitude – Just builds awesome product after awesome product.
In the good ol’ days I would buy a cell phone and my rebate would come in the mail as a check. I could deposit it with ease and get every cent back at no cost. Nowadays I buy a new phone from Verizon Wireless and I’m congratulated on the $50 pre-loaded Visa card I’ll be receiving. Congratulations! What a racket: they take my money, frame it as a discount I’m getting, and make it needlessly difficult to spend.
Getting the cash off the card is the priority for me. I’m willing to pay a small fee to have Verizon stripped of every cent it stole from me. This is where Stripe comes in. I set up a simple SSL checkout page on my website where I’m able to process payments from anyone. I send them a link and they can pay me instantly. It works gloriously.
Now I take my Verizon Rebate card and run it through the form.
Jared: $50 (-2.9% + $0.30)
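Stripe’s per-charge fee at the time was 2.9% + $0.30, so the net on the $50 card works out like this:

```python
# Net proceeds on a $50 charge after Stripe's per-charge fee (2.9% + $0.30).
amount = 50.00
fee = round(amount * 0.029 + 0.30, 2)   # 2.9% of $50 is $1.45, plus $0.30
net = round(amount - fee, 2)
print(f"fee=${fee:.2f}, net=${net:.2f}")  # fee=$1.75, net=$48.25
```

Losing $1.75 to get $48.25 of spendable cash beats letting the card gather dust.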
Every once in a while I come across something that’s inexpensive and can seemingly fix anything. My lifetime favorites are the classic trio of bungee cords, duct tape and rope. There’s a new must-have MacGyver tool to add to your toolbox: Sugru. It’s an all-purpose putty that can be molded into any shape and air-dries overnight.
My first test of Sugru was my favorite pair of in-ear headphones, Bose IE2s. Since accidentally running them through the washer (and dryer!) 3 or 4 times, the wires connecting to the buds were starting to become exposed. Electrical tape could be a temporary fix, but I knew it was only a matter of time before the connection wore down; plus, electrical tape looks kinda ghetto fabulous.
I used one small packet (about the size of a ketchup packet) of Sugru and rolled it around where the wire connects to the buds. This took roughly 2 minutes, after which I laid out my headphones for the night so that the Sugru could cure. Here is what the finished product looks like.
For roughly $2 I managed to not only fix my headphones, but improve the wire support. Since Sugru is heat-resistant, I’d bet my headphones would do just fine going through the dryer many, many times.
Here are a few other ideas for using Sugru:
- Patching a hole in an old rowboat
- Rounding off a sharp edge
- Push some into a stripped screw hole and let it dry, then put the screw back in
- Create a grip for your hockey stick, machete, gun, etc..
You can purchase Sugru online in a variety of colors and sizes. Make sure to add some to your toolbox.
My alma mater, UMass Amherst, was completely embarrassed by some witty hackers who managed to modify their main website’s meta tags.
I wouldn’t be surprised if one of their “sysadmins” (students paid too close to $7.25/hr) were to blame. We all know what happens when you don’t fairly compensate those who maintain the systems you rely on. Right? Right.