How GitOps and Docker containers are changing the role of Data Modeling,
now at the center of end-to-end metadata management.
From data dictionaries to schema contracts. And back.
We are often asked by prospects what sets Hackolade apart from traditional data modeling tools. There are many obvious differences:
- our unique ability to represent nested objects in Entity-Relationship diagrams and visualize physical schemas for "schemaless" databases;
- the simplicity and power of a clean and intuitive user interface;
- our vision to support a large variety of modern storage and communication technologies, often cloud-based;
- our care and speed in responding to customer requests and suggestions (we release 50+ times per year);
- our product-led-growth strategy with no pushy salespeople.
But we tend to think that the true revolution enabled by Hackolade is our metadata-as-code approach. It all started with our Command-Line Interface, which lets customers automatically generate schemas and scripts, reverse-engineer an instance to infer its schema, or reconcile environments and detect drift in indexes or schemas. The CLI also lets customers automatically publish documentation to a portal or data dictionary when data models evolve.
Today, it has become obvious to many of our customers that success with self-service analytics, data meshes, microservices, and event-driven architectures hinges on keeping data catalogs and dictionaries synchronized, end-to-end, with the constant evolution of schemas for databases and data exchanges.
In other words, the business side of human-readable metadata management must stay up to date and in sync with the technical side of machine-readable schemas. And that process can only work at scale if it is automated.
Data models provide an abstraction that describes and documents the information system of an enterprise. They deliver value in understanding, communication, collaboration, and governance. They are great for iterating on designs without writing a line of code, are easy for humans to read, and can feed the data dictionaries used by the various data citizens on the business side.
Schemas provide “consumable” collections of objects describing the layout or structure of a file, a transaction, or a database. A schema is a contract between producers and consumers of data, and an authoritative source of structure.
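The producer/consumer contract idea can be sketched in a few lines of Python. This is a minimal, hand-rolled check over an illustrative subset of JSON Schema; the `customer_schema` contract, its field names, and the `violations` helper are all hypothetical examples, not taken from an actual Hackolade model.

```python
# Minimal sketch of a schema-as-contract check. The schema below is a
# hypothetical example covering only "required" and simple "type" keywords.

TYPE_MAP = {"string": str, "integer": int, "number": (int, float), "boolean": bool}

def violations(schema: dict, record: dict) -> list[str]:
    """Return the list of contract violations for `record` against `schema`."""
    errors = []
    for field in schema.get("required", []):
        if field not in record:
            errors.append(f"missing required field: {field}")
    for field, spec in schema.get("properties", {}).items():
        if field in record and not isinstance(record[field], TYPE_MAP[spec["type"]]):
            errors.append(f"wrong type for {field}: expected {spec['type']}")
    return errors

# Both the producer and the consumer code against this shared contract.
customer_schema = {
    "required": ["id", "email"],
    "properties": {
        "id": {"type": "integer"},
        "email": {"type": "string"},
        "active": {"type": "boolean"},
    },
}

print(violations(customer_schema, {"id": 1, "email": "a@b.co"}))  # → []
print(violations(customer_schema, {"id": "1"}))
# → ['missing required field: email', 'wrong type for id: expected integer']
```

A producer runs such a check before publishing data; a consumer runs the same check on arrival. Because both sides validate against the same versioned schema, a breaking change surfaces as a failed check instead of a silent downstream corruption.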
It is hard enough for IT departments to keep schemas in sync across the various technologies involved in data pipelines. For data to be useful, business users must also have an up-to-date view of the structures.
Metadata provides meaning and context to business users so they can derive precise knowledge and intelligence from data structures that may otherwise lack nuance or be ambiguous without thorough descriptions. This is critical for proper reporting and decision making.
Hackolade is a unique data modeling tool with the technical ability to facilitate strategic business objectives. Customers leverage the Command-Line Interface, invoking its functions from a Jenkins CI/CD pipeline or a command prompt, often inside Docker containers. Combined with Git as the repository for data models and schemas, users get change tracking, peer reviews, versioning with branches and tags, easy rollbacks, and an offline mode.
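A pipeline along these lines could be wired up as a Jenkins declarative stage. The sketch below is purely illustrative: the Docker image name, CLI command and flag names, model paths, and the `publish-to-dictionary` helper are all assumptions for the sake of the example, not copied from product documentation.

```groovy
// Illustrative Jenkinsfile: image, commands, flags, and paths are
// hypothetical placeholders, not documented Hackolade CLI syntax.
pipeline {
    agent { docker { image 'hackolade/studio-cli' } }  // hypothetical image
    stages {
        stage('Generate schemas') {
            steps {
                // Forward-engineer schemas from the data model versioned in Git
                sh 'hackolade forwEng --model=models/customer.hck.json --path=out/'
            }
        }
        stage('Detect drift') {
            steps {
                // Compare the model against a schema inferred from the live instance
                sh 'hackolade compMod --model1=models/customer.hck.json --model2=out/live.hck.json'
            }
        }
    }
    post {
        success {
            // Push the refreshed documentation to the data dictionary
            sh 'publish-to-dictionary out/'  // hypothetical helper script
        }
    }
}
```

Running these steps on every merge to the model repository is what turns metadata-as-code from a slogan into a working loop: the Git history records who changed which structure and why, and the pipeline keeps downstream schemas and dictionaries aligned automatically.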
Putting it all together is easy once all the pieces of the puzzle are integrated. Design and maintain data models in Hackolade, then publish them to data dictionaries so business users always have an up-to-date view of the data structures deployed across the technology landscape, with synchronized schemas. Voilà!
Note that some of our customers have pushed this even further. Leveraging custom properties, they create hooks for code generation of data classes and REST APIs used in different application components to provide the foundation for architectural lineage.
Contact us at email@example.com if you have any questions.