Writing The Book That Offers A Single Reference For The Fundamentals Of Data Engineering

Data Engineering Podcast - A podcast by Tobias Macey - Sundays

Summary

Data engineering is a difficult job, requiring a large number of skills that often don't overlap. Any effort to understand how to start a career in the role has required stitching together information from a multitude of resources that might not all agree with each other. To provide a single reference for anyone tasked with data engineering responsibilities, Joe Reis and Matt Housley took it upon themselves to write the book "Fundamentals of Data Engineering". In this episode they share their experiences researching and distilling the lessons that will be useful to data engineers now and into the future, without being tied to any specific technologies that may fade from fashion.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.

When you're ready to build your next pipeline, or want to test out the projects you hear about on the show, you'll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production-ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high-throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don't forget to thank them for their continued support of this show!

Atlan is the metadata hub for your data ecosystem. Instead of locking your metadata into a new silo, unleash its transformative potential with Atlan's active metadata capabilities. Push information about data freshness and quality to your business intelligence, automatically scale up and down your warehouse based on usage patterns, and let the bots answer those questions in Slack so that the humans can focus on delivering real value. Go to dataengineeringpodcast.com/atlan today to learn more about how Atlan's active metadata platform is helping pioneering data teams like Postman, Plaid, WeWork & Unilever achieve extraordinary things with metadata and escape the chaos.

Prefect is the modern dataflow automation platform for the modern data stack, empowering data practitioners to build, run, and monitor robust pipelines at scale. Guided by the principle that the orchestrator shouldn't get in your way, Prefect is the only tool of its kind to offer the flexibility to write code as workflows. Prefect specializes in gluing together the disparate pieces of a pipeline and integrating with modern distributed compute libraries to bring power where you need it, when you need it. Trusted by thousands of organizations and supported by over 20,000 community members, Prefect powers over 100 million business-critical tasks a month. For more information on Prefect, visit dataengineeringpodcast.com/prefect today.

Your host is Tobias Macey and today I'm interviewing Joe Reis and Matt Housley about their new book on the Fundamentals of Data Engineering.

Interview

Introduction
How did you get involved in the area of data management?
Can you explain what possessed you to write such an ambitious book? What are your goals with this book?
What was your process for determining what subject areas to include in the book?
How did you determine what level of granularity/detail to use for each subject area?
Closely linked to what subjects are necessary to be effective as a data engineer is the concept of what that title encompasses. How have the definitions shifted over the past few decades?
In your experiences working in industry and researching for the book, what is the prevailing view on what data engineers do?
In the book you focus on what you term the "data lifecycle engineer". What are the skills and background that are needed to be successful in that role?
Any discussion of technological concepts and how to build systems tends to drift toward specific tools. How did you balance the need to be agnostic to specific technologies while providing relevant and relatable examples?
What are the aspects of the book that you anticipate needing to revisit over the next 2-5 years? Which elements do you think will remain evergreen?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on writing "Fundamentals of Data Engineering"?
What are your predictions for the future of data engineering?

Contact Info

Joe: LinkedIn, Website
Matt: LinkedIn, @doctorhousley on Twitter

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

Fundamentals of Data Engineering (affiliate link)
Ternary Data
Designing Data-Intensive Applications
James Webb Space Telescope
Google Colossus Storage System
DMBoK (Data Management Body of Knowledge)
DAMA
Bill Inmon
Apache Druid
RTFM (Read The Fine Manual)
DuckDB Podcast Episode
VisiCalc
Ternary Data Newsletter
Meroxa Podcast Episode
Ruby on Rails
Lambda Architecture

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA.

Support Data Engineering Podcast