The death of big data and the ramp-up of smart applications



We're seeing the shift from big data collection to real-time data consumption, and 2019 will be a turning point, as smart applications – not users – will become the biggest data consumers. With this in mind, here is an overview of trends in the data trenches that are facilitating this shift for businesses around the world and the expected consequences for many aspects of technology.

Expensive mega projects become iterative micro-adjustments

I have seen first-hand how companies of all kinds are turning away from big data moonshots, such as catch-all data lakes or vast Internet of Things (IoT) platforms, toward incremental improvements driven by "in the moment" information, and I think this trend will accelerate in 2019. This is partly a recognition of failure: projects cost too much and took too long to implement (if they were completed at all). Many were never delivered as promised, and those that were finished were obsolete by the time they reached production. It is also a recognition of the larger shift from slow planning cycles to rapid iteration. When obtaining data is slow and difficult, long planning horizons are required and complexity is expected. When getting data is fast and easy, quick iteration and adjustment win every time.

Streams proliferate as lakes dry up

A corollary is that 2019 will mark the decline of the data lake, as organizations seek to exploit real-time data streams rather than pour them into a vast but slow and murky reservoir in the hope of someday drawing value from it. Data lakes typify the kind of mega project that recalls earlier big data initiatives, which generated a lot of revenue for consultants but few results for organizations. Although it will certainly remain necessary to store and archive data for long-term needs, I think companies will focus more on extracting value and insight as data flows into the organization rather than simply collecting it for later refinement and use.
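As a toy sketch of the contrast (all names here are illustrative, not from any specific product), the following Python snippet maintains aggregates incrementally as each event arrives, so value is extracted in the moment rather than after a later batch pass over a lake:

```python
from dataclasses import dataclass

@dataclass
class RunningStats:
    """Incrementally maintained statistics over a stream of events.

    Value is extracted as each event arrives, instead of dumping raw
    records into a data lake for later batch refinement.
    """
    count: int = 0
    total: float = 0.0

    def observe(self, value: float) -> None:
        # Update the aggregates the moment the event lands.
        self.count += 1
        self.total += value

    @property
    def mean(self) -> float:
        return self.total / self.count if self.count else 0.0

# Simulated event stream (e.g. order amounts arriving in real time).
stats = RunningStats()
for amount in [10.0, 20.0, 30.0]:
    stats.observe(amount)
```

The point of the sketch is that `stats` is always current: no separate extract-and-refine job is needed before the numbers can be used.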

Data is now part of the fabric of the company

With the move to real-time processing of fast data will come the emergence of new technologies that provide a data fabric, or backbone, that applications and users tap into to get the data they need instead of storing their own copies. A 2017 NewVantage survey of company executives notes that more than 85% of companies have launched programs to create a data-driven culture. New technologies, especially next-generation publish/subscribe messaging solutions and associated streaming solutions, make this possible and bring many benefits.

One of them is simplicity, as companies move away from a patchwork of data systems and data stores toward a more integrated, company-wide data structure. Another is silo removal: when data is distributed and ubiquitous rather than trapped in discrete, isolated systems, silos disappear and data becomes simultaneously more accessible and more valuable.
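A minimal in-memory sketch of the publish/subscribe idea (a hypothetical `Broker` class, not any real messaging product) shows how several applications can share one data stream instead of each keeping a private copy:

```python
from collections import defaultdict
from typing import Any, Callable

class Broker:
    """Toy in-memory publish/subscribe broker.

    Producers publish to a topic once; any number of applications
    subscribe to that topic, so no silo owns a private copy of the data.
    """
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: Any) -> None:
        # Fan the message out to every subscriber of the topic.
        for handler in self._subscribers[topic]:
            handler(message)

broker = Broker()
billing, analytics = [], []          # two independent consuming apps
broker.subscribe("orders", billing.append)
broker.subscribe("orders", analytics.append)
broker.publish("orders", {"id": 1, "amount": 99.0})
```

Both consumers see the same event from the same fabric; a production system would add durability and delivery guarantees, but the fan-out shape is the same.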

Data processing extends to the edge

The processing of data, and the reaction to it, is moving progressively closer to the point of origin, which in many cases is the edge of the enterprise. Cloud computing providers certainly hope to pull all of it into the cloud, but I predict that will not happen, at least not completely. This is largely due to the IoT, and I am not talking just about your connected toaster. In an industrial setting, these devices can pack serious computing muscle and serve as important data sources. Think of robotic manufacturing equipment, transportation equipment, medical devices and more. While the IoT will certainly encompass a large number of simple sensors that "phone home" with the latest data for central or cloud processing, the focus will increasingly be on understanding and using that data closer to the edge, where the action is.
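One common edge pattern is to summarize locally and forward only what matters. The sketch below (function and thresholds are hypothetical) keeps a summary on the device and lets only out-of-range readings "phone home":

```python
def edge_filter(readings, lower, upper):
    """Process sensor readings at the edge: keep a local summary and
    forward only out-of-range readings to the central system."""
    forwarded = []
    total = 0.0
    for r in readings:
        total += r
        if r < lower or r > upper:
            forwarded.append(r)  # only anomalies travel over the network
    summary = {"count": len(readings), "mean": total / len(readings)}
    return summary, forwarded

# e.g. temperature readings from a machine; 95.0 is an anomaly
summary, alerts = edge_filter([20.1, 19.8, 95.0, 20.3], lower=0.0, upper=50.0)
```

The design choice is bandwidth and latency: the device acts on (or flags) anomalies immediately, while the cloud receives a compact summary instead of every raw reading.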

Serverless finally makes Multicloud a reality

This may be an exaggeration, but it highlights a point: two of the most popular topics in technology today, multicloud (using multiple cloud provider platforms) and serverless architectures (abstracting away the infrastructure required to run code), are just two different views of the same realization. Most people probably do not care whether they are using Amazon Web Services, Azure or Google Cloud Platform, or which service hosts an application. Most users do not care whether they are on a public, private, on-premise or hybrid cloud. All they care about is submitting their code and getting their data. In this regard, distributed data will become the glue that holds everything together, while applications provide the intelligence needed to act on that data.
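One way this "just submit your code" idea tends to be realized is to keep business logic provider-agnostic and push platform details into thin adapters. A sketch (all names hypothetical, not tied to any real cloud SDK):

```python
def handler(event: dict) -> dict:
    """Cloud-agnostic business logic: plain data in, plain data out,
    with no reference to any provider-specific SDK or runtime."""
    name = event.get("name") or "world"
    return {"greeting": f"hello, {name}"}

# A hypothetical thin adapter per platform translates that platform's
# event shape into the plain dict the core handler expects, so the same
# logic can be deployed behind any provider's serverless runtime.
def http_query_adapter(query_params: dict) -> dict:
    return handler({"name": query_params.get("name")})

result = http_query_adapter({"name": "multicloud"})
```

Because only the adapter layer changes per provider, the code that carries the business value does not care which cloud hosts it, which is the realization the prediction rests on.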

Which brings us to the final prediction, the one we started with.

Smart apps – not people – will become the biggest consumers of data

Even if it does not happen all at once in 2019, I think that in a few years we will look back and see that this year marked a turning point: applications with embedded artificial intelligence, data analytics and machine learning will have replaced reports, dashboards and other person-centered outputs as the main data consumers. Software will be empowered to act on data for us, whether machine-to-machine or machine-to-consumer, rather than merely surfacing it so that people can examine it and use it to make decisions.

This will have profound implications not only for technology but also for how people make decisions. I will also venture a bonus prediction: the ramp-up of smart apps will ease the data staffing crisis. There has long been a gap between the ever-growing number of job openings in data analytics and the shortage of trained candidates. This does not mean that data analytics will be less valued; far from it. Someone still needs to build the data models and analyses that tell the software what to do. But software will increasingly shoulder the burden of data consumption and analysis.

Does this mean we are inevitably headed toward the day when we are replaced by our AI-powered robot overlords? I will leave that prediction for another year.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.

