
You are going to lose your data one day or another!

It gets easier and easier to create and keep data, but it is just as easy to lose it. In fact, some historians believe that the current digital age will make it harder for future researchers to find documents, because destruction and data loss are so widespread.

When you use a traditional hosting company (not cloud), there is a 100% chance that a disk will crash one day or another. It is just a matter of time: as the hours accumulate, your disks get closer and closer to their MTBF (Mean Time Between Failures). SSDs (including NVMe and M.2 drives) last longer than mechanical HDDs, but they are rated with a different metric: Terabytes Written (TBW), the maximum amount of data that can be written before the drive fails completely.
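To make the "just a matter of time" point concrete, here is a minimal sketch of how an MTBF rating translates into failure probability. It assumes a constant failure rate (an exponential lifetime model, a common simplification) and illustrative numbers, not figures for any specific drive:

```python
import math

def disk_failure_probability(hours_in_service: float, mtbf_hours: float) -> float:
    """Probability that a single drive fails within the given service time,
    assuming a constant failure rate (exponential lifetime model)."""
    return 1.0 - math.exp(-hours_in_service / mtbf_hours)

# A drive rated at 1,000,000 hours MTBF, kept in service for 5 years:
five_years = 5 * 365 * 24  # 43,800 hours
p = disk_failure_probability(five_years, 1_000_000)
print(f"Per-drive failure probability over 5 years: {p:.1%}")  # roughly 4.3%

# Across a fleet of 100 such drives, at least one failure is almost certain:
p_fleet = 1.0 - (1.0 - p) ** 100
print(f"At least one failure in a 100-drive fleet: {p_fleet:.1%}")  # roughly 98.7%
```

Even an impressive-sounding MTBF gives a few percent of failure risk per drive over a realistic service life, and across many drives a failure somewhere becomes a near certainty.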

So regardless of the drive type:

Your disks will fail!

Using RAID does not change anything, as a datacenter technician discovered:

Despite using RAID 10, two disks in the array failed within a very short period of time, before they could be replaced, rendering the entire four-disk array useless! Very rare!

said a technician, happy to have something exciting to share.

The customers were, of course, delighted to be the casualties of this rare event, having lost data and suffered 36 hours of downtime:

I lost one week of emails and work, because the most recent backup available was one week old. Restoring the server was slow because the hosting company did not fit SSDs, but reinstalled ordinary, cheap HDDs.

Said a clueless customer
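How rare is that event, really? A quick Monte Carlo sketch puts a number on it. The parameters here are hypothetical, and the model assumes exponential drive lifetimes and that the array dies when a disk's mirror partner fails before the first failed disk is replaced:

```python
import random

def simulate_raid10_loss(mtbf_hours: float, replace_hours: float,
                         horizon_hours: float, trials: int = 100_000,
                         seed: int = 42) -> float:
    """Monte Carlo estimate of the probability that a 4-disk RAID 10 array
    loses data within the horizon. Data is lost when both disks of a mirror
    pair fail within `replace_hours` of each other."""
    rng = random.Random(seed)
    rate = 1.0 / mtbf_hours
    losses = 0
    for _ in range(trials):
        lifetimes = [rng.expovariate(rate) for _ in range(4)]
        # Mirror pairs: (disk 0, disk 1) and (disk 2, disk 3).
        for a, b in ((0, 1), (2, 3)):
            first, second = sorted((lifetimes[a], lifetimes[b]))
            if first <= horizon_hours and second - first <= replace_hours:
                losses += 1
                break  # array is already lost; stop checking this trial
    return losses / trials

# 100,000-hour MTBF drives, 72 hours to swap a failed disk, 5-year horizon:
p = simulate_raid10_loss(100_000, 72, 5 * 365 * 24)
print(f"Estimated RAID 10 data-loss probability over 5 years: {p:.4%}")
```

The estimate comes out small but clearly non-zero: run enough arrays for enough years and some of them will hit exactly this "very rare" double failure. RAID protects against a single disk dying, not against data loss.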

There is a reason why Cloud Computing is superior:

  • drive failures are invisible to you, because the industry's metric is data durability, not individual disk lifetime
  • you can switch between SSD and HDD without downtime
  • backups can be put in place very easily
  • backups are cheap (sometimes less than $/€/£15 per YEAR)
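"Very easily" is not an exaggeration. As a minimal sketch of the local half of a backup routine (the directory names are hypothetical, and the resulting archive should still be copied off-site, e.g. to object storage, to survive losing the machine itself):

```python
import tarfile
import time
from pathlib import Path

def make_backup(source_dir: str, backup_dir: str) -> Path:
    """Create a timestamped .tar.gz snapshot of source_dir inside backup_dir.

    This only covers the local half of a backup strategy: ship the archive
    off-site afterwards, or it will die with the same disk as the data.
    """
    source = Path(source_dir)
    target = Path(backup_dir)
    target.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = target / f"{source.name}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=source.name)
    return archive

# Hypothetical paths, for illustration:
# make_backup("/var/www/mysite", "/backups")
```

Schedule something like this daily (cron, systemd timers) and the "newest backup is one week old" scenario above simply cannot happen.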

If you run any website, contact us for an affordable backup solution!

