Once the available IT infrastructure options are known, the IT requirements can be reviewed against them. Below is a list of seven common criteria to consider when making decisions about IT infrastructure.
1. LATENCY
Latency, that is, the data transfer delay between the application and the server, affects how smooth the service feels to use. In particular, when an application generates a large number of small data packets (e.g. client-server applications), the recurring per-packet delay accumulates and can slow operation down to a disturbing extent.
When latency needs to be minimised, the servers must be located as close to the software users as possible. In this case, the best options are either the company’s own data centre or, in some cases, a public cloud service. For example, AWS is a good alternative for many Finnish companies thanks to the data centre region it opened in Stockholm.
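Before committing to a location, the round-trip delay can be estimated from where the users actually sit. A minimal sketch in Python, using TCP connection time to a few public endpoints as a rough proxy for latency (the endpoints here are only examples; substitute the addresses you are evaluating):

```python
import socket
import time

def estimate_latency_ms(host: str, port: int = 443, samples: int = 10) -> float:
    """Return the approximate median TCP connect time to host:port in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established; we only care about the elapsed time
        timings.append((time.perf_counter() - start) * 1000)
    timings.sort()
    return timings[len(timings) // 2]

# Example: compare the Stockholm AWS region with a more distant one.
for host in ("ec2.eu-north-1.amazonaws.com", "ec2.us-east-1.amazonaws.com"):
    print(f"{host}: {estimate_latency_ms(host):.1f} ms")
```

TCP connect time is only an approximation of application-level latency, but it is usually enough to show the difference between a nearby and a distant data centre.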
2. FAULT TOLERANCE AND RECOVERY
The applications used by the company should be categorised by criticality: what is the business impact if the application is down? Fault tolerance and recovery deserve particular attention for critical applications, such as those that affect the functioning of a production environment.
For example, fault tolerance can be improved by duplicating the service platform locally, that is, by running the application on two physically separate servers within the same location. When one server fails, the application continues to operate on the other server, and users experience no break in service.
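Seen from the client side, local duplication boils down to trying a second node when the first one fails. A minimal sketch in Python with placeholder server addresses (in practice a load balancer usually plays this role, but the principle is the same):

```python
import requests  # third-party HTTP library: pip install requests

# Two physically separate servers in the same location (placeholder addresses).
SERVERS = ["https://app-node1.example.com", "https://app-node2.example.com"]

def fetch_with_failover(path: str) -> requests.Response:
    """Try each duplicated node in turn; users see no break if one node fails."""
    last_error = None
    for server in SERVERS:
        try:
            response = requests.get(server + path, timeout=3)
            response.raise_for_status()
            return response
        except requests.RequestException as error:
            last_error = error  # node down or unhealthy, try the next one
    raise RuntimeError("All duplicated nodes failed") from last_error
```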
If higher fault tolerance is required, redundancy can be built across several physical locations, so that losing one location does not endanger the operation of the application itself. However, this is usually the most expensive option, as all the components along the way must be duplicated and data actively mirrored between the locations. For this reason, it should be reserved for the most critical applications.
The third option is so-called disaster recovery, that is, replicating or backing up the data to another location. In this case it may take longer to get the service up and running again, but no data is lost.
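As a sketch of what the disaster recovery option can look like in practice, the snippet below copies a disk snapshot from a primary region to a recovery region using boto3, the AWS SDK for Python. The snapshot ID and regions are placeholders; in a real setup the copy would run on a schedule.

```python
import boto3  # AWS SDK for Python: pip install boto3

SOURCE_REGION = "eu-north-1"            # primary location (Stockholm)
RECOVERY_REGION = "eu-west-1"           # recovery location (Ireland)
SNAPSHOT_ID = "snap-0123456789abcdef0"  # hypothetical snapshot ID

# The copy is requested from the recovery region, pulling from the source region.
ec2 = boto3.client("ec2", region_name=RECOVERY_REGION)
result = ec2.copy_snapshot(
    SourceRegion=SOURCE_REGION,
    SourceSnapshotId=SNAPSHOT_ID,
    Description="Scheduled disaster recovery copy",
)
print("Recovery copy started:", result["SnapshotId"])
```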
Public cloud offers the best possibilities for building fault-tolerant environments. Alternatively, you can build your own data centres, connections, redundancy and the related management across multiple locations, but this is not always the most cost-effective option.
3. AGILITY
The agility of IT development can be a major competitive asset: the model of continuous development will, over time, provide businesses with better services and enable faster error correction.
Agility can be realised when the infrastructure (network, firewalls, capacity, storage) can be managed through code and the platform is built on microservice architecture and containerisation. Developers then do not have to wait for capacity or install the required applications; a complete development environment is available in a few seconds.
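To make "infrastructure managed through code" concrete, the sketch below provisions a development server with a single API call instead of a ticket to the IT department, again using boto3 as one example. The image ID is a hypothetical machine image with the development tooling pre-installed.

```python
import boto3  # AWS SDK for Python: pip install boto3

# Provision development capacity through code instead of a manual install queue.
ec2 = boto3.client("ec2", region_name="eu-north-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical image with dev tooling baked in
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "dev-environment"}],
    }],
)
print("Development environment starting:", response["Instances"][0]["InstanceId"])
```

With containerised microservices the same idea applies at a finer granularity: a container image brings up the complete environment in seconds rather than minutes.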
If agility is important, public and private cloud are the best options for you.
4. DATA LOCATION
Having data located in Finland or, for example, within the EU can sometimes be important. The reasons may relate to the regulation applied to the sector or to the company's business preferences.
These days, public cloud providers also operate within the EU and have documented processes for data processing, so GDPR requirements are generally not an obstacle.
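If data residency matters, the region can be pinned explicitly whenever resources are created. A minimal sketch with boto3 that keeps stored data in the Stockholm region, and thus within the EU (the bucket name is a placeholder):

```python
import boto3  # AWS SDK for Python: pip install boto3

# Create a storage bucket pinned to the Stockholm region so the data stays in the EU.
s3 = boto3.client("s3", region_name="eu-north-1")
s3.create_bucket(
    Bucket="example-company-data",  # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": "eu-north-1"},
)
```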
5. INFORMATION SECURITY
Security threats evolve constantly, so continuous development work is required to protect against attacks and to prevent them. Contrary to general belief, public cloud may be a more secure option than a company’s own data centre.
The explanation is simple: public cloud providers cannot afford to blunder, and therefore they need to have the necessary resources, competence, stand-in arrangements, 24/7 operations, business continuity plans, security and regulatory certifications, and data security arrangements in place.
The infrastructure of public cloud is code-controlled and automated, which means that much of data security is taken care of routinely, without manual work. However, one should note that responsibility for the security of a cloud service is typically divided between the supplier and the purchaser. The supplier is responsible for the physical security of the data centre, the data centre environment, and the monitoring, maintenance and updating of the equipment platforms.
Customers, in turn, are responsible for all planning, deployment, maintenance and updating tasks related to their own environment. If you don't have in-house competence or sufficient resources, managed cloud services are a carefree option.
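One concrete example of the customer's share of the responsibility: the platform offers storage encryption, but switching it on for your own environment is up to you. A boto3 sketch with a placeholder bucket name:

```python
import boto3  # AWS SDK for Python: pip install boto3

# The provider secures the platform; enabling encryption on your own bucket
# is part of the customer's responsibility.
s3 = boto3.client("s3", region_name="eu-north-1")
s3.put_bucket_encryption(
    Bucket="example-company-data",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"},
        }],
    },
)
```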
Read more about managed cloud services (in Finnish).
6. SCALABILITY
Do the capacity requirements of your company vary? There might be unexpected spikes in demand, or a steady increase as the company grows. If the need for extra capacity is temporary, or growth is difficult to predict accurately, it is a good idea to choose a cloud service where capacity always matches demand and billing is based on actual use.
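As an illustration of use-based capacity, a cloud auto scaling group can be told to follow demand automatically. The sketch below uses boto3 to attach a target-tracking policy that adds servers when average CPU load rises and removes them when it falls; the group and policy names are placeholders.

```python
import boto3  # AWS SDK for Python: pip install boto3

# Let capacity follow actual demand: scale out when average CPU exceeds the
# target and scale back in when load drops, so billing tracks real use.
autoscaling = boto3.client("autoscaling", region_name="eu-north-1")
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-server-group",  # hypothetical group name
    PolicyName="track-cpu-50",                # hypothetical policy name
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,  # keep average CPU utilisation around 50 %
    },
)
```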
7. PLANS FOR THE FUTURE
The transition to public cloud services is easy, but hasty decisions may create a costly supplier lock-in, that is, a situation where the service can no longer be transferred to or provisioned on any other service platform.
Supplier lock-in can be avoided by using microservice architecture when building services and by otherwise taking a long-term approach to building the infrastructure, as sketched below. The implementation for the Finnish Fair Corporation is a good example of smart architecture (in Finnish).
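One common long-term approach is to hide provider-specific APIs behind the company's own interfaces, so that the backend can be swapped without rewriting the application. A minimal, hypothetical sketch:

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Provider-neutral storage interface that the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalDiskStore(BlobStore):
    """On-premises implementation; a public cloud-backed class could replace it."""

    def __init__(self, root: str) -> None:
        self.root = root

    def put(self, key: str, data: bytes) -> None:
        with open(f"{self.root}/{key}", "wb") as file:
            file.write(data)

    def get(self, key: str) -> bytes:
        with open(f"{self.root}/{key}", "rb") as file:
            return file.read()

# The application depends only on BlobStore, so changing the service platform
# means swapping the implementation class, not rewriting the service.
store: BlobStore = LocalDiskStore(".")
store.put("report.txt", b"quarterly figures")
```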
Read about our IT infrastructure services (in Finnish).
Author:
Jani Meuronen