
The Next Generation Of Extract, Transform, and Load At #TalendConnect

We are wrapping up three days at TalendConnect this week, where we got a glimpse of the current state of the big data world, and of the tools and services that are helping companies, organizations, institutions, and government agencies move their data around. When it comes to data migration, extract, transform, and load (ETL), and making sure your data is doing what it should, not much has changed in the last decade, but we did see significant movement forward in a few areas, hinting at the next generation of ETL on the horizon.

All the classic tools for extracting data are there, as well as the tools for mapping, merging, transforming, and enriching data, then loading it into the systems where it is needed. However, there are other shifts under the hood that show this world is moving forward. Processes can now be isolated, scaled, and distributed using containers, enabling data to be extracted, migrated, and loaded into any cloud environment. This embraces the multi-cloud movement and acknowledges that our data rarely resides in one place.
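To make the containerized approach concrete, here is a minimal sketch of a single ETL step written so it can be packaged into a container image and run unchanged in any cloud. The environment variable names, file paths, and CSV layout are hypothetical, chosen only for illustration; nothing here is specific to Talend's tooling.

```python
# etl_step.py -- a minimal sketch of one isolated ETL step.
# SOURCE_PATH, TARGET_PATH, and the CSV layout are hypothetical; the point is
# that configuration comes from the environment, so the same container image
# can run unchanged wherever the data happens to live.
import csv
import os


def extract(path):
    """Read raw records from the source file."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def transform(records):
    """Normalize a hypothetical 'amount' field and drop empty rows."""
    cleaned = []
    for row in records:
        if not row.get("amount"):
            continue
        row["amount"] = round(float(row["amount"]), 2)
        cleaned.append(row)
    return cleaned


def load(records, path):
    """Write the transformed records to the target location."""
    if not records:
        return
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=records[0].keys())
        writer.writeheader()
        writer.writerows(records)


if __name__ == "__main__":
    # Source and target are injected by the container runtime as environment
    # variables, keeping the step itself cloud-agnostic.
    source = os.environ.get("SOURCE_PATH", "input.csv")
    target = os.environ.get("TARGET_PATH", "output.csv")
    load(transform(extract(source)), target)
```

Because the step carries no hard-coded connection details, it can be scheduled, scaled out, or moved between clouds by the orchestration layer without touching the code.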

There is another sign of a significant shift toward the future: the injection of machine learning (ML) throughout the data pipeline. Using ML to make sense of the noise holds the promise of building intelligence into our data pipelines. Only time will tell what impact this fast-growing technology will have on the ETL process, but it clearly dominated the conversation. Almost every talk, presentation, and exhibitor mentioned ML, and the impact it is having on the data processes that occur across enterprise organizations.
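As a rough illustration of what "making sense of the noise" can look like inside a pipeline, here is a sketch of an ML-assisted quality check applied during the transform stage. The choice of scikit-learn's IsolationForest and the feature names are our own assumptions for the example; the talks did not prescribe specific libraries or models.

```python
# A sketch of an unsupervised quality check inside a transform stage.
# The 'amount' / 'latency_ms' features and thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest


def flag_anomalies(records, contamination=0.05):
    """Split records into (clean, suspect) lists using an unsupervised model."""
    features = np.array([[r["amount"], r["latency_ms"]] for r in records])
    model = IsolationForest(contamination=contamination, random_state=0)
    labels = model.fit_predict(features)  # 1 = normal, -1 = anomalous
    clean = [r for r, label in zip(records, labels) if label == 1]
    suspect = [r for r, label in zip(records, labels) if label == -1]
    return clean, suspect


if __name__ == "__main__":
    rows = [{"amount": 10.0, "latency_ms": 120}] * 50 + [
        {"amount": 9999.0, "latency_ms": 5}  # one obvious outlier
    ]
    ok, flagged = flag_anomalies(rows)
    print(f"{len(flagged)} record(s) routed to review instead of the warehouse")
```

The appeal of this pattern is that suspect records can be routed to a review queue rather than silently loaded, which is one concrete way intelligence gets built into the pipeline.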

We did see talk of data streaming at TalendConnect, but most of it was about larger streams driven by Kafka and other Apache projects. It is clear that the API conversation in this community is just beginning to shift into high gear, with the recent acquisition of Restlet by Talend, and a growing recognition that all data will ultimately become accessible via APIs. That means there is a lot of work ahead for these data-rich organizations to mature their data pipelines, deliver APIs, and evolve on their API journey, realizing the importance of delivering data as streams to their most demanding consumers, and exposing the need for an event-driven approach to delivering data across the enterprise and with their partners.
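For readers less familiar with the event-driven side of this, here is a minimal sketch of publishing changed records as events rather than waiting for a batch load, using Kafka as mentioned above. The broker address, topic name, record shape, and the kafka-python client are all assumptions made for the example; any Kafka client would serve the same purpose.

```python
# A sketch of publishing changed records as events instead of batch loads.
# Broker address, topic name, and record fields are illustrative assumptions.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)


def publish_change(record):
    """Emit a single change event so downstream consumers see it in real time."""
    producer.send("customer-changes", value=record)


if __name__ == "__main__":
    publish_change({"customer_id": 42, "status": "active"})
    producer.flush()  # ensure the event actually leaves the client buffer
```

Consumers subscribed to the topic receive each change as it happens, which is the event-driven posture the most demanding data consumers increasingly expect.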

We had a number of interesting conversations with attendees at TalendConnect, as well as with the leadership at Talend. We are continuing to learn more about their roadmap, and to understand the streaming needs of their partners and customers. It is a community that is ripe with opportunity to deliver data in a more meaningful, real-time way. We learned a lot about the state of the big data ETL world at the event, and we are intrigued by the opportunity to help streamline data pipelines in the Talend ecosystem. As we learn more about Talend and its community, we will publish stories here on the blog, sharing what we learn.


**Original source: streamdata.io blog**