Will innovation appear in the cloud, at the edge, or elsewhere?
Time: 2018-01-08
Innovation is critical for organizations that want to maintain business relevance and avoid disruption, but where will that innovation come from?
Some industry experts believe that innovation will happen not in the cloud but at the edge. Others counter that edge computing is merely an extension of cloud computing. So what does this mean? Most likely, that cloud computing and edge computing will work together.
In addition, Apple's iPhone X recently introduced facial recognition technology, which has drawn attention because it exposes users to new personal-data risks.
Before this, Apple's smart devices used fingerprint recognition, while some Android devices used iris recognition. Plots once confined to science fiction are rapidly becoming scientific fact.
Businesses need to be proactive, especially with the EU's General Data Protection Regulation (GDPR) taking effect in five months. To ensure that retailers, government agencies, emergency services, and other organizations do not fall foul of regulatory standards, they need to consider whether facial recognition, license-plate recognition, vehicle sensors, and other technologies can meet the GDPR's requirements.
Empowering citizens
Jim McGann, vice president of marketing and business development at Index Engines, offers his own thinking on these legal requirements: "GDPR gives the power over personal data to citizens, so companies that do business in the European Union, including those based in the United States, must comply with this rule."
He added that GDPR raises a key data-management issue for organizations. In many cases, organizations find it difficult to locate personal data in their systems or paper records, and they often do not know whether that data needs to be retained, deleted, modified, or corrected. GDPR therefore raises an organization's responsibilities to a new level, given the substantial penalties it may face.
He also offered recommendations for adopting relevant solutions: "We provide information management solutions and application strategies to ensure that an organization's business complies with data protection regulations. Petabytes of data need to be collated, yet organizations often have no real understanding of what that data contains. Index Engines offers clean-up services that examine different data sources to see what can be cleared. Many organizations can release 30% of their data, which allows them to manage the rest more efficiently. Once an organization can manage its data effectively, it can implement appropriate policies and measures, because most companies know which types of files contain personal data."
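The kind of scan McGann describes can be sketched in a few lines. This is a minimal illustration only, not Index Engines' actual product: the file pattern and the two regexes (email addresses and international-style phone numbers) are assumptions standing in for a real tool's much richer detectors.

```python
import re
from pathlib import Path

# Illustrative PII patterns only; a real scanner would detect many more.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+\d{2}[\s-]?\d{6,12}")

def scan_for_personal_data(root):
    """Walk a directory tree and return (path, hit_count) for every
    text file that appears to contain personal data."""
    flagged = []
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        hits = len(EMAIL.findall(text)) + len(PHONE.findall(text))
        if hits:
            flagged.append((str(path), hits))
    return flagged
```

A report like this is the starting point for the policy step McGann mentions: once an organization knows which files hold personal data, it can decide what to retain, correct, or delete.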
Clearing out data
McGann continued: "Most of this data is very sensitive, so many companies are reluctant to talk about it, but we do a lot of work with legal consulting firms to keep organizations in compliance."
For example, a Fortune 500 company that completed a data cleanup with Index Engines found that 40% of its data no longer held any commercial value, so the company decided to clear it out.
He said: "This saves data center management costs: they get positive results by cleaning up the data. But if it's a public company, you're not free to delete data, because of regulatory compliance issues." In some cases, files must be kept for up to 30 years. He suggested that "businesses need to ask whether these files have commercial value or any compliance requirement"; data with no valid reason for retention can be deleted. Some companies are also migrating their data to the cloud in order to remove it from the data center.
In the process, many companies need to check whether data has commercial value before making migration decisions. Organizations need to think about what is in their files, whether they rely on edge computing or cloud computing for data management, backup, and storage.
Making sure information is compliant
It is therefore important that organizations explore how new technologies are taken up by consumers and citizens alike, and consider how to use the resulting data to create value for both. Organizations using this data need to be mindful of information security when providing, using, protecting, and improving digital services.
Facial recognition, for example, has many applications: it lets users not only unlock apps on their smartphones but also pay with them. With on-device facial recognition, the images are stored locally rather than shipped to a data center. Even so, some data must still be kept in a database, and that database needs to be protected so that hackers cannot exploit personal data in malicious attacks.
Innovation in edge computing
As organizations increasingly invest in autonomous vehicles, smart cities, and connected automotive technologies such as automated emergency braking (AEB), they also need to consider where innovation will occur in 2018 and how to balance compliance with innovation.
More and more people think that innovation will appear in edge computing rather than the cloud, while others see edge computing as just an extension of cloud computing. Even if data needs to be processed closer to its source, large amounts of it still need to be analyzed elsewhere. Network and data latency have historically been a hurdle, and the hope is that their impact can be reduced or eliminated.
Edge computing extends data center capacity by allowing a large number of smaller data centers to store, manage, and analyze data, while some data is managed and analyzed locally by temporarily disconnected devices or sensors (such as connected autonomous vehicles). Once the network connection is restored, that data can be backed up to the cloud for further action.
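The store-and-forward pattern described above can be sketched as follows. This is a minimal illustration under assumed names (the `EdgeNode` class, the brake-temperature sensor, and the anomaly threshold are all invented for the example), not any vendor's API: the device filters readings locally and flushes the interesting ones to the cloud when connectivity returns.

```python
import json
from collections import deque

class EdgeNode:
    """A device that analyzes readings locally and buffers results
    while offline (illustrative sketch only)."""

    def __init__(self, threshold):
        self.threshold = threshold  # local decision rule
        self.buffer = deque()       # data held until connectivity returns

    def ingest(self, reading):
        # Local analysis: only anomalous readings are worth uploading.
        if reading["value"] > self.threshold:
            self.buffer.append(reading)

    def sync_to_cloud(self, upload):
        # Called once the connection is restored; `upload` is any
        # callable that ships a JSON payload to the central cloud.
        while self.buffer:
            upload(json.dumps(self.buffer.popleft()))

# Usage: four readings arrive offline; only two exceed the threshold
# and are uploaded when the link comes back.
node = EdgeNode(threshold=100)
for v in (42, 150, 90, 210):
    node.ingest({"sensor": "brake-temp", "value": v})
sent = []
node.sync_to_cloud(sent.append)
```

The design choice this illustrates is exactly the compromise the article describes: routine data is discarded or summarized at the edge, and only the reduced set travels to the cloud.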
Data acceleration
Reducing network latency and data latency improves the customer experience. However, as more data is transferred to the cloud, network latency and packet loss can have a considerable negative impact on data throughput. Without machine-intelligence solutions such as PORTrockIT, the effects of latency and packet loss may inhibit data and backup performance.
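The interaction of latency and packet loss can be made concrete with the standard Mathis model for steady-state TCP throughput (a textbook approximation, not something specific to any product named here): throughput ≈ (MSS / RTT) × (C / √p), with C ≈ 1.22. The segment size, round-trip times, and loss rate below are assumed example values.

```python
from math import sqrt

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis-model upper bound on TCP throughput in bits per second."""
    return (mss_bytes * 8 / rtt_s) * 1.22 / sqrt(loss_rate)

# 1460-byte segments and 0.01% packet loss, comparing a nearby edge
# site (10 ms RTT) with a distant cloud region (100 ms RTT):
near = tcp_throughput_bps(1460, 0.010, 0.0001)  # roughly 142 Mbit/s
far = tcp_throughput_bps(1460, 0.100, 0.0001)   # roughly 14 Mbit/s
```

Tenfold more round-trip time means tenfold less throughput at the same loss rate, which is why moving processing closer to the data source pays off so directly.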
If a facial recognition database cannot quickly transmit citizenship and immigration information, the result may be delays at airports; for autonomous vehicles, slow data may mean accidents or technical failures.
With the advent of autonomous car technology, vehicles will exchange data continuously. Some of that data is safety-critical and requires a fast turnaround, while other data is routine road information such as traffic flow and speed. If autonomous cars send all their safety-critical data back to a central cloud over 4G or 5G networks, network delays could add significant latency before a response arrives. There is currently no simple and economical way to reduce latency between networks; the speed of light is one factor that cannot be changed. Managing network and data latency effectively and efficiently is therefore very important.
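The two-tier handling described above can be sketched with a priority queue: safety-critical messages jump ahead of routine telemetry. The message names and the `VehicleMessageQueue` class are illustrative assumptions, not an actual vehicle protocol.

```python
import heapq

# Illustrative message classes; real systems define many more.
CRITICAL = {"emergency_brake", "collision_warning"}

class VehicleMessageQueue:
    """Processes safety-critical messages before routine telemetry."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserving arrival order

    def publish(self, kind, payload):
        priority = 0 if kind in CRITICAL else 1
        heapq.heappush(self._heap, (priority, self._seq, kind, payload))
        self._seq += 1

    def next_message(self):
        _, _, kind, payload = heapq.heappop(self._heap)
        return kind, payload

# Usage: routine traffic data arrives first, but the brake event
# is handled ahead of it.
q = VehicleMessageQueue()
q.publish("traffic_flow", {"speed_kmh": 57})
q.publish("emergency_brake", {"decel_ms2": 9.1})
first, _ = q.next_message()
```

Prioritizing at the edge like this keeps the fast-turnaround path short while the bulk road data can tolerate the slower trip to the cloud.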
The big data challenge
Hitachi has said that autonomous cars will create around 2PB of data per day, and it is estimated that a connected car will create about 25GB of data per hour. There are more than 800 million cars in the United States, China, and Europe, and the total will exceed a billion in the near future. If half of those cars are fully connected and are used an average of 3 hours a day, they will generate 37.5 billion gigabytes of data per day.
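The arithmetic behind the 37.5-billion-GB figure (which implies roughly 25 GB per hour per car, not terabytes) checks out as a back-of-envelope calculation:

```python
# Back-of-envelope check of the article's daily data-volume figure.
cars = 1_000_000_000        # projected vehicle count in the near future
connected = cars // 2       # assume half are fully connected
gb_per_hour = 25            # estimated data created per connected car
hours_per_day = 3           # assumed average daily usage

daily_gb = connected * gb_per_hour * hours_per_day  # 37.5 billion GB/day
```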
If, as expected, most new cars are autonomous by the mid-2020s, that number will seem trivial by comparison. Obviously, not all of this data can be sent straight back to the cloud without some level of validation and reduction. There must be a compromise, and edge computing can provide one for autonomous vehicles.
From a physical point of view, storing ever-increasing amounts of data will be a challenge; the sheer scale of the data matters. This creates financial and economic problems in terms of cost per GB. For example, while electric vehicles are considered the mainstream of the future, the power consumed by all this data is bound to increase.
In addition, organizations must ensure that the large amounts of data created by individuals and devices do not violate data protection legislation.