
European cloud computing giant OVH data center on fire! Why Web 3.0 needs IPFS!


A serious fire recently broke out at a facility of European cloud computing giant OVH in Strasbourg, France. OVH operates four data centers at the site: SBG2, where the fire started, was completely destroyed, and the building of the neighboring SBG1 data center was partially damaged. As a precaution, all OVH data centers in Strasbourg were temporarily shut down.

OVH currently operates 27 data centers across Europe, North America, and Asia. Alongside AWS, Microsoft Azure, and Google Cloud, it is one of the largest web hosting providers in the world, and many regard it as the great hope of the European hosting industry.

The OVH data center building before the fire

Octave Klaba, OVH's founder and chairman of the board, posted real-time updates about the fire on Twitter and advised customers to activate their disaster recovery plans. He said that servers would undergo emergency repairs over the next one to two weeks, and that the timeline for a full recovery was still undetermined.

Disadvantages of centralized storage

When a large data center catches fire, even the most comprehensive disaster recovery plan, launched as quickly as possible, may not be able to recover all of the data. In other words, a fire at a traditional large data center means not only a huge loss in hardware costs, but potentially the permanent loss of the most valuable asset: the data itself.

The OVH incident

Take the OVH incident as an example: the fire had a serious impact on many websites across Europe. According to Netcraft, as many as 3.6 million websites across 464,000 domains were knocked offline.

The game Rust, for example, hosted game data in the OVH data center. According to its official Twitter account, 25 of its servers were destroyed and all of the data on them was lost in the fire; even after the data center comes back online, none of that data can be recovered.

Customers affected by the fire also include the European Space Agency’s data and information access service, the ONDA project, which hosts geospatial data for users and lets them build applications in the cloud. OVH provides the cloud infrastructure and delivers 10 PB of unstructured data from the Copernicus Earth observation programme to developers through the public cloud.

Single point of failure

The project manager said that all services “were temporarily suspended… following a major fire at the OVH cloud infrastructure in Strasbourg this morning.”

A traditional data center stores all of its data in one place, which makes it heavily dependent on a single backbone node and leaves the network with very little resilience to risk.

Once this “single point” is destroyed, whether by fire, flood, earthquake, volcanic eruption, or another natural disaster, the whole network is paralyzed. Users’ information security and privacy are left exposed, and their valuable data cannot be saved.

Not only are traditional large data centers fragile and poorly equipped to withstand risk, as described above, they also cannot keep up with the demands of an era of exploding data volumes.

Take NASA as an example: by 2025, NASA expects to hold around 247 PB of data and to pay AWS (Amazon’s cloud) about 5.439 million US dollars per month, roughly 65.13 million US dollars per year, for cloud storage.

On top of that roughly 65 million US dollars, NASA is expected to pay AWS about 30 million US dollars a year for additional cloud services, and it must also bear the egress cost whenever users download data. Every download and transfer adds to NASA’s bill, and this part of the cost is essentially uncontrollable.
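
As a rough sanity check on those figures (using only the numbers quoted above; the implied per-petabyte rate is a derived estimate, not a figure from the article), the monthly bill multiplied by twelve lines up with the annual total cited:

```python
# Back-of-the-envelope check on the cloud storage figures quoted above.
# Inputs come from the article; the per-PB rate is only a derived estimate.

monthly_cost_usd = 5_439_000      # quoted monthly AWS storage bill
capacity_pb = 247                 # projected data volume by 2025

annual_storage_cost = monthly_cost_usd * 12
cost_per_pb_per_year = annual_storage_cost / capacity_pb

print(f"Annual storage cost: ~${annual_storage_cost / 1e6:.1f} million")
print(f"Implied cost per PB per year: ~${cost_per_pb_per_year / 1e3:.0f} thousand")
# Annual storage cost: ~$65.3 million   (close to the ~$65 million cited above)
# Implied cost per PB per year: ~$264 thousand
```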

Unsurprisingly, then, NASA and many other enterprises are looking for more suitable data storage solutions.

Distributed storage

Why do traditional large data centers keep running into problems like these?

In essence, today’s Internet is built on HTTP, a protocol that relies on centrally operated backbone networks. As the data centers behind HTTP carry more and more data, and more and more users request and transmit it, the network behaves like a road at rush hour: congested and slow to respond.

Users, of course, expect a better experience. To meet the low-latency access needs of massive numbers of users around the world, cloud storage providers deploy servers at backbone network nodes in major cities and build them into large server clusters.

Building large-scale clusters solves the latency problem for user access, but it also lowers resilience: once a fire, earthquake, or tsunami strikes, there is no escape, and even backup measures are hard to carry out quickly.

So truly solving the data storage problem is no longer as simple as building a few more large data centers; it requires a new underlying approach to storing data.

IPFS may be the underlying storage solution this era needs most. In the face of natural disasters:

IPFS’s fault tolerance mechanism ensures that your data is replicated enough times and stored in different regions. Even if the copies in one region are completely destroyed by a natural disaster, your data can be fully recovered from the backups held in other regions, which goes a long way toward guaranteeing the security and permanence of data stored on IPFS.
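
To illustrate the idea, here is a minimal sketch in Python (the region names, replication factor, and storage layout are invented for the example; the real network relies on nodes pinning content rather than this exact scheme): each block is replicated into several independent regions, so losing any one region still leaves recoverable copies.

```python
import hashlib

# Simplified sketch of region-aware replication (illustrative only).
REGIONS = ["eu-west", "us-east", "ap-south"]   # hypothetical regions
REPLICATION_FACTOR = 3

def content_id(data: bytes) -> str:
    """Address content by a hash of its bytes (CID-like, simplified)."""
    return hashlib.sha256(data).hexdigest()

def store(data: bytes, storage: dict) -> str:
    """Replicate the block into every region; return its content address."""
    cid = content_id(data)
    for region in REGIONS[:REPLICATION_FACTOR]:
        storage.setdefault(region, {})[cid] = data
    return cid

def recover(cid: str, storage: dict, lost_region: str) -> bytes:
    """Fetch the block from any surviving region after one region is destroyed."""
    for region, blocks in storage.items():
        if region != lost_region and cid in blocks:
            return blocks[cid]
    raise KeyError("all replicas lost")

storage = {}
cid = store(b"mission-critical telemetry", storage)
print(recover(cid, storage, lost_region="eu-west") == b"mission-critical telemetry")  # True
```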

In addition, IPFS’s distributed storage greatly reduces dependence on central backbone networks, improves the security of data on the Internet, and helps defend against DDoS, XSS, CSRF, and other attacks.

In terms of storage cost and transmission speed:

IPFS uses peer-to-peer (P2P) technology for data transmission and download. Every node can act as both a cache and a download source, so the more nodes there are, the faster transfers become, and bandwidth costs can reportedly be cut by nearly 60%.
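
A toy sketch of the principle (the peers and chunks below are made up; this is not the actual IPFS block exchange protocol): different chunks of the same file can be pulled in parallel from whichever peers hold them, so every extra peer adds download capacity.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy model of P2P chunk retrieval: each "peer" is just a dict
# mapping chunk index -> bytes.
peer_a = {0: b"Hello, ", 2: b"storage!"}
peer_b = {1: b"distributed ", 2: b"storage!"}
peers = [peer_a, peer_b]

def fetch_chunk(index: int) -> bytes:
    """Ask peers in turn for a chunk; any peer holding it can serve it."""
    for peer in peers:
        if index in peer:
            return peer[index]
    raise KeyError(f"chunk {index} not found on any peer")

# Chunks are fetched in parallel from whichever peers have them, then reassembled.
with ThreadPoolExecutor() as pool:
    chunks = list(pool.map(fetch_chunk, range(3)))

print(b"".join(chunks).decode())  # Hello, distributed storage!
```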

In addition, IPFS addresses data by its content. Identical files are never stored twice, which squeezes out redundant copies, frees up storage space, and lowers the cost of storing data.
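
A minimal sketch of what content addressing means for deduplication (simplified: a real IPFS identifier is a multihash over chunked blocks, not a bare SHA-256 of the whole file): because the address is derived from the bytes themselves, adding the same file twice produces the same address and only one stored copy.

```python
import hashlib

# Simplified content-addressed store; the deduplication principle is the point.
store = {}

def add(data: bytes) -> str:
    cid = hashlib.sha256(data).hexdigest()   # address derived from the content
    store.setdefault(cid, data)              # identical content is stored only once
    return cid

cid1 = add(b"annual report 2021")
cid2 = add(b"annual report 2021")            # same bytes uploaded again

print(cid1 == cid2)   # True: same content, same address
print(len(store))     # 1: no duplicate copy is kept
```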

IPFS can therefore greatly reduce the cost of storing and downloading data, and the more nodes join the network, the faster access becomes.

It is worth noting that the IPFS network has no central point, and any of us can run a node on it, helping to build a safer, faster, and more efficient Web 3.0.

If blockchain is a reinvention of traditional Internet technology, then IPFS is a reinvention of the traditional HTTP transfer protocol.

Tags: ipfs
