We all know the importance of the people working in a company, but many seem to forget the importance of the data they generate. In this post we will not talk about people but about data, that great forgotten asset, until the day we lose it …

Studies indicate the following:

  • 30% of companies that suffer a fire and lose all their data close in less than a year and 70% in less than 5 years (Home Office Computing Magazine)
  • 31% of computer users have lost files at some point due to causes beyond their control.
  • At least 34% of the companies that make backup copies do not check their integrity and 77% have found failures in them.
  • 60% of companies that lose all their data will close in less than 6 months.

Large companies all (or the vast majority) have disaster recovery plans, but what about the Spanish SME? Are we prepared to suffer a total loss of data at any moment? It only takes a small spark that burns down our server room, or a thief who walks off with our data server in a couple of minutes … or, better yet, someone in our company opening an email carrying the CryptoLocker virus and our entire database being encrypted in seconds … the threat is not rare; it is more common than we think.

Are you prepared for a disaster tonight? If you think you are not, or you are not sure, keep reading this post.

1  Set your needs

Before you start making backups, stop and think about what you really need. There are two concepts that you have to take into account:

  • Business Continuity (BC)
  • Disaster Recovery (DR)

Both concepts depend directly on two metrics: the Recovery Time Objective (RTO), the maximum downtime we can accept, and the Recovery Point Objective (RPO), the maximum age we can accept for the recovered files. Can we work with backups from 7 days ago? 3 days? 12 hours? Clearly not all the information we generate will have the same objectives, and that is why we must set these times for each type of file. Accounting may need an RPO close to 0, while other data may tolerate 24 hours or more. Keep in mind that an RPO of 0 is also much more expensive to implement.
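To make these objectives concrete, they could be recorded per data category and used to derive how often each category must be backed up at minimum. A minimal sketch in Python; the categories and values below are hypothetical examples for illustration, not recommendations (RPO here means the maximum acceptable age of recovered data, RTO the maximum acceptable downtime):

```python
# Hypothetical data categories with their recovery objectives.
# rpo_hours: maximum acceptable age of recovered data.
# rto_hours: maximum acceptable downtime.
OBJECTIVES = {
    "accounting":   {"rpo_hours": 1,   "rto_hours": 4},
    "project_docs": {"rpo_hours": 24,  "rto_hours": 24},
    "archive":      {"rpo_hours": 168, "rto_hours": 72},
}

def min_backup_interval_hours(category: str) -> int:
    """A backup must run at least as often as the RPO allows."""
    return OBJECTIVES[category]["rpo_hours"]

for name in OBJECTIVES:
    print(f"{name}: back up at least every {min_backup_interval_hours(name)} h")
```

Writing the table down, even in a form this simple, forces the discussion about which data really needs the expensive near-zero objectives.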

2  Evaluation of recovery strategies

There are many ways to make sure that our data is available and correct, but which one to use depends on our needs. The main techniques are:

  • Replication – Replication involves automatically creating copies of our data on a second server. The copies may be made in real time or on a schedule, and may or may not allow retrieving a history of any file. Real-time copies are usually practical only on local networks, since transmitting our files over the internet in real time is severely limited by the upload speed contracted with our internet provider.
  • Application-based replication – Data can be replicated by the applications that use it. A clear example would be a program that stores its data in real time in two different locations, analogous to a RAID 1 (mirror) between hard drives.
  • Hypervisor backup – virtualization systems such as VMware vSphere or Microsoft Hyper-V offer advanced backup systems for virtualized machines that have hardly any impact on their performance.
  • Backups to external disks or tapes – this is the traditional system used by the vast majority of SMEs: a backup at a given time to an external device. It is inadvisable because it usually depends on manual execution; since it is not automated, in many cases we simply forget to do it.
  • Snapshots – Although we cannot consider them a backup system, since our data is not transferred to a second device, they are a very convenient and efficient way to return to an earlier version of something, avoiding data loss in case of a minor problem. (This system would be useless against a fire, theft, …)
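As an illustration of scheduled (not real-time) replication that also keeps a history, a script like the following could be launched periodically by a scheduler. This is a naive sketch: it makes a full timestamped copy on every run, and all names and paths are assumptions:

```python
import shutil
import tempfile
from datetime import datetime
from pathlib import Path

def snapshot(source: Path, backup_root: Path) -> Path:
    """Copy `source` into a new timestamped folder under `backup_root`,
    keeping one full copy per run as a simple history."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = backup_root / stamp
    shutil.copytree(source, target)  # creates target and its parents
    return target

# Tiny demonstration with throwaway directories.
base = Path(tempfile.mkdtemp())
src = base / "data"
src.mkdir()
(src / "report.txt").write_text("quarterly figures")
made = snapshot(src, base / "backups")
```

A real deployment would copy only changed files (or hard-link unchanged ones) to avoid duplicating the full data set on every run, but the idea is the same: each run leaves a dated folder we can go back to.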

Opting for a single technique does not guarantee that we can always recover from any disaster, so we recommend combining several of these methods to capture the best of each. For example, real-time replication is very useful since, in case of theft of our main server, we will have a second one with all the information; but against a virus like CryptoLocker, snapshots are better, since they let us easily return to a previous version of a file …


Implementing what has been described above may seem obvious, but we have to take many factors into account before starting. In our company there must be at least one person responsible for the security of our data. The person responsible for the backup copies will own the backup schedule and the checking of their consistency.

Backups must be functional, which means the person responsible must verify that the copies are correct and that the information in them is usable for work. A common error of this kind involves backing up files that are open at copy time: if we back up a file while someone on our network is using it, it is quite likely that, even though the copy completes correctly, the file gives a consistency error when we try to open it from the backup. Recall what was mentioned at the beginning of the post: at least 34% of companies never check that their backups are fully functional.
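Part of that verification can be automated. As a sketch, the following compares SHA-256 checksums of every file in the source tree against its backup copy; the paths and layout are assumptions, and note that this detects missing or corrupted copies, not the open-file consistency problem described above:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 to avoid loading it whole."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_dir: Path, backup_dir: Path) -> list[str]:
    """Return relative paths whose backup copy is missing or differs."""
    problems = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source_dir)
        dst = backup_dir / rel
        if not dst.is_file() or sha256_of(src) != sha256_of(dst):
            problems.append(str(rel))
    return problems

# Demonstration with throwaway directories.
base = Path(tempfile.mkdtemp())
live, copy = base / "live", base / "copy"
live.mkdir()
(live / "ledger.csv").write_text("a,b\n1,2\n")
shutil.copytree(live, copy)
ok = verify_backup(live, copy)    # no differences reported
(copy / "ledger.csv").write_text("corrupted")
bad = verify_backup(live, copy)   # the damaged file is reported
```

Running a check like this on a schedule, and actually opening a sample of restored files, is what moves a company out of that 34%.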

Continuing with the person responsible for data recovery, they must also know the order in which data should be recovered. In companies we have hot data and cold data: data we need at all times and data we only consult from time to time. It is important to have a recovery manual that allows business continuity with minimum downtime.
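The recovery manual could encode this order explicitly, so hot data always comes back first. A minimal sketch; the tiers and data categories below are hypothetical:

```python
# Hypothetical recovery manual: each entry is (tier, data set).
RECOVERY_ORDER = [
    ("hot",  "accounting database"),
    ("hot",  "active project files"),
    ("cold", "closed projects"),
    ("cold", "historical archive"),
]

def recovery_plan() -> list[str]:
    """Hot data first; within each tier the listed order is preserved,
    since Python's sort is stable."""
    tiers = {"hot": 0, "cold": 1}
    ordered = sorted(RECOVERY_ORDER, key=lambda item: tiers[item[0]])
    return [name for _, name in ordered]
```

The point is less the code than the discipline: writing the order down before the disaster, instead of deciding it while the company is stopped.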

Data protection is not something we can set up today and forget for life. We must establish rules to keep it current. It is very important, for example, to have a rule so that every time a new computer or server is installed in our company, a process for backing up and recovering the data on that new machine is defined.
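Such a rule is easy to audit with a simple comparison between the machine inventory and the list of machines covered by the backup plan. A sketch; the machine names are hypothetical:

```python
def uncovered_machines(inventory: set[str], backed_up: set[str]) -> set[str]:
    """Machines present in the inventory but absent from the backup plan."""
    return inventory - backed_up

# Hypothetical inventory check: one recently installed PC was never
# added to the backup plan.
missing = uncovered_machines(
    inventory={"server-01", "pc-maria", "pc-new"},
    backed_up={"server-01", "pc-maria"},
)
```

Anything reported by a check like this is exactly the data that would be lost silently in a disaster.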


Technology changes, and so do our needs. The requirements of our applications grow or shrink over time. In addition, what seems impossible today may be a reality tomorrow: the cloud evolves very quickly, and upload and download speeds increase constantly. It is important that the person in charge of data security audits the systems at least once a year.

Finally, we must give our employees the tools and rights needed to recover their own data in the event of minor losses, such as restoring an old version of a file, and thus avoid overloading the responsible person with routine tasks.

This whole approach could be implemented with one or several NAS servers.