Apache Cassandra and DataStax


Linear scalability and proven fault-tolerance on commodity hardware or cloud infrastructure

The Apache Cassandra NoSQL database is the right choice when you need scalability and high availability without compromising performance, and with no single point of failure.

Hackolade was specially adapted to support the data modeling of Cassandra, including User-Defined Types and the concepts of Partitioning and Clustering keys. It lets users define, document, and display Chebotko physical diagrams. The application closely follows the Cassandra terminology, data types, and Chebotko notation.

The reverse-engineering function covers table definitions, indexes, user-defined types, and functions, and also infers the schema of JSON structures when they are detected in text or blob columns.
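For illustration only, here is a minimal sketch (all keyspace, type, and table names are hypothetical, not Hackolade output) of the kind of CQL a Chebotko-style physical model corresponds to: a user-defined type plus a table whose primary key combines a partition key and a clustering key, executed through the cassandra-driver Python client.

```python
# Hypothetical example: a UDT and a table with partition + clustering keys.
from cassandra.cluster import Cluster  # pip install cassandra-driver

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS videodb
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TYPE IF NOT EXISTS videodb.address (
        street text, city text, zip text
    )
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS videodb.videos_by_user (
        user_id     uuid,                 -- partition key
        added_date  timestamp,            -- clustering key
        video_id    uuid,
        title       text,
        location    frozen<address>,      -- user-defined type
        PRIMARY KEY ((user_id), added_date)
    ) WITH CLUSTERING ORDER BY (added_date DESC)
""")
```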


Azure Cosmos DB

Data modeling of document collections

Azure Cosmos DB is Microsoft's globally distributed, multi-model NoSQL database to elastically and independently scale throughput and storage across any number of Azure's geographic regions.

Hackolade was specially adapted to support the data modeling of multiple document types within one single collection. Each Document Type is modeled as a separate entity, so its attributes can be defined separately. We support both SQL API (formerly known as DocumentDB API) and MongoDB API.
To be added soon: Table API, Gremlin API, and Cassandra API.
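As a sketch of the underlying pattern (the account URL, key, container, and field names below are hypothetical, and the azure-cosmos v4 SDK is assumed), two document types can share one container and be told apart by a discriminator field, which is what gets modeled as two separate entities:

```python
# Hypothetical example: two document types in a single Cosmos DB SQL API container.
from azure.cosmos import CosmosClient, PartitionKey  # pip install azure-cosmos

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
db = client.create_database_if_not_exists(id="shop")
container = db.create_container_if_not_exists(
    id="catalog", partition_key=PartitionKey(path="/tenantId")
)

# Document type 1: product
container.upsert_item({
    "id": "p-1", "tenantId": "t-42", "docType": "product",
    "name": "Road bike", "price": 1199.0,
})
# Document type 2: review -- different attributes, same container
container.upsert_item({
    "id": "r-1", "tenantId": "t-42", "docType": "review",
    "productId": "p-1", "rating": 5, "comment": "Fast and light",
})
```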


Couchbase


Data modeling of multiple object types within one single bucket, or multiple buckets, if preferred

Couchbase Server has become the de facto standard for building Systems of Engagement. It is designed with a distributed architecture for performance, scalability, and availability. It enables developers to build applications more easily and faster by leveraging the power of SQL with the flexibility of JSON.

Hackolade was specially adapted to support the data modeling of multiple object types within one single bucket. Each Document Kind is modeled as a separate entity, so its attributes can be defined separately.
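As a sketch of that pattern (assuming the Couchbase Python SDK 4.x; bucket, keys, and field names are hypothetical), two document kinds can live in the same bucket and be distinguished by a type field:

```python
# Hypothetical example: two document kinds in one Couchbase bucket.
from couchbase.auth import PasswordAuthenticator   # pip install couchbase
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

cluster = Cluster("couchbase://localhost",
                  ClusterOptions(PasswordAuthenticator("Administrator", "password")))
collection = cluster.bucket("travel").default_collection()

# Document kind 1: airline
collection.upsert("airline_10", {
    "docType": "airline", "name": "40-Mile Air", "country": "United States",
})
# Document kind 2: route -- different attributes, same bucket
collection.upsert("route_10000", {
    "docType": "route", "airlineId": "airline_10",
    "sourceAirport": "BRU", "destinationAirport": "JFK",
})
```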


AWS DynamoDB: NoSQL database service

Data modeling of fully managed cloud NoSQL database service

Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud NoSQL database and supports both document and key-value store models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.

Hackolade was specially adapted to support the data modeling of DynamoDB tables, including partition (hash) and sort (range) keys, as well as multiple regions.
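For illustration (the table and attribute names are hypothetical), here is a minimal boto3 sketch of a table declaring the two key roles mentioned above:

```python
# Hypothetical example: DynamoDB table with a partition (hash) and sort (range) key.
import boto3  # pip install boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")
dynamodb.create_table(
    TableName="GameScores",
    AttributeDefinitions=[
        {"AttributeName": "UserId", "AttributeType": "S"},
        {"AttributeName": "GameTitle", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "UserId", "KeyType": "HASH"},      # partition key
        {"AttributeName": "GameTitle", "KeyType": "RANGE"},  # sort key
    ],
    BillingMode="PAY_PER_REQUEST",
)
```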


Elasticsearch


When you get answers instantly, your relationship with your data changes.

Elasticsearch is a RESTful search and analytics engine based on Apache Lucene. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.

Hackolade was specially adapted to support the NoSQL data modeling of Elasticsearch, including its large choice of data types and parent-child relationships. We dynamically generate mappings for forward-engineering, and infer the schema through document sampling, using existing mappings when available.
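As an illustration (index and field names are hypothetical, and the elasticsearch-py 8.x client is assumed), an explicit mapping with typed fields and a join field for a parent-child relation looks like this:

```python
# Hypothetical example: Elasticsearch mapping with a parent-child join field.
from elasticsearch import Elasticsearch  # pip install elasticsearch

es = Elasticsearch("http://localhost:9200")
es.indices.create(
    index="articles",
    mappings={
        "properties": {
            "title":     {"type": "text"},
            "published": {"type": "date"},
            "views":     {"type": "integer"},
            # parent-child relationship between an article and its comments
            "relation":  {"type": "join", "relations": {"article": "comment"}},
        }
    },
)
```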


Google Firebase Realtime Database

Mobile app success made simple

The Google Firebase Realtime Database is a cloud-hosted database. Data is stored in JSON and synchronized in real time to every connected mobile or other client. It lets developers build rich collaborative applications, with data also persisted locally, to give users a responsive experience.

Hackolade was specially adapted to support the modeling of data stored as one large JSON tree, with data nodes and their associated keys.
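As a plain illustration of that structure (node names are hypothetical), the whole database is a single JSON tree in which every node is addressable by a key path such as /users/alice/name:

```python
# Hypothetical example: a Realtime Database modeled as one JSON tree of nodes and keys.
import json

database_tree = {
    "users": {
        "alice": {"name": "Alice", "lastSeen": 1700000000},
        "bob":   {"name": "Bob",   "lastSeen": 1700000050},
    },
    "messages": {
        "room-1": {
            "m1": {"from": "alice", "text": "Hi Bob"},
            "m2": {"from": "bob",   "text": "Hi Alice"},
        }
    },
}
print(json.dumps(database_tree, indent=2))
```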

Note: the forward- and reverse-engineering of schemas are not currently available. They're being developed and will be released at a later time.


Google Cloud Firestore


Store & sync data globally

Cloud Firestore is a flexible, scalable database for mobile, web, and server development from Firebase and Google Cloud Platform.

Hackolade was specially adapted to support the modeling of data stored in collections, nested objects, and subcollections.
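For illustration (collection, document, and field names are hypothetical; credentials are assumed to be configured via GOOGLE_APPLICATION_CREDENTIALS), the layout combines documents with nested objects and subcollections:

```python
# Hypothetical example: a Firestore collection with a nested object and a subcollection.
from google.cloud import firestore  # pip install google-cloud-firestore

db = firestore.Client()
users = db.collection("users")
users.document("alice").set({
    "name": "Alice",
    "address": {"city": "Brussels", "zip": "1000"},   # nested object
})
# subcollection under the "alice" document
users.document("alice").collection("orders").document("o-1").set({
    "total": 42.0, "status": "shipped",
})
```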

Note: the forward- and reverse-engineering of schemas are not currently available. They're being developed and will be released at a later time.


Apache HBase

When you need random, realtime read/write access to your Big Data

Apache HBase is an open-source, distributed, versioned, non-relational (NoSQL) database modeled after Google's Bigtable. This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware.

Hackolade was specially adapted to support the data modeling of HBase, whether you store your data in column families or as a JSON object. With our reverse-engineering function, you can easily discover, document, and enrich the structure of your column families and qualifiers, plus infer the structure of JSON documents you store in HBase.
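As an illustration of both storage styles (table, column-family, and qualifier names are hypothetical; the happybase client and a running HBase Thrift server are assumed), a row can mix plain qualifiers with a JSON blob whose structure can then be inferred by sampling:

```python
# Hypothetical example: an HBase table with two column families, one holding JSON.
import json
import happybase  # pip install happybase

connection = happybase.Connection("localhost")
connection.create_table("customers", {"profile": dict(), "events": dict()})

table = connection.table("customers")
table.put(b"cust-001", {
    b"profile:name": b"Alice",
    b"profile:tier": b"gold",
    # JSON document stored in a qualifier; its schema can be inferred by sampling
    b"events:last_order": json.dumps({"id": "o-42", "total": 99.5}).encode("utf-8"),
})
```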


Apache Hive


NoSQL data modeling for Hadoop Hive

Apache Hive is an open-source data warehouse system built on top of Hadoop for querying and analyzing large datasets stored in Hadoop files, using HiveQL (HQL), which is similar to SQL. HiveQL automatically translates SQL-like queries into MapReduce jobs. This provides a means of attaching structure to data stored in HDFS.

Hackolade was specially adapted to support the data modeling of Hive, including Managed and External tables and their metadata, partitioning, and primitive and complex datatypes. It dynamically forward-engineers HQL Create Table scripts as the structure is built in the application. You may also reverse-engineer Hive instances to display the corresponding ERD and enrich the model. The application closely follows the Hive terminology and storage structure.
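By way of illustration only (database, table, and column names are hypothetical, and this is not Hackolade output), an HQL Create Table statement for an external, partitioned table with complex datatypes looks like this:

```python
# Hypothetical example: HQL for an external table with a partition and complex types.
create_table_hql = """
CREATE EXTERNAL TABLE IF NOT EXISTS web.page_views (
    user_id     STRING,
    referrer    STRING,
    tags        ARRAY<STRING>,
    device      STRUCT<os: STRING, model: STRING>
)
PARTITIONED BY (view_date DATE)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t'
STORED AS TEXTFILE
LOCATION '/data/web/page_views'
"""
print(create_table_hql)  # submit with beeline, the Hive CLI, or a client library
```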


MarkLogic

ACID multi-model NoSQL DB for the enterprise

MarkLogic is designed from the ground up to make massive quantities of heterogeneous data easily accessible through search. MarkLogic is a leading 'multi-model' database, supporting JSON documents and RDF triples, all with ACID transaction capabilities.

Hackolade was specially adapted to support the data modeling of the MarkLogic NoSQL database, including the JSON definition of model descriptors, geospatial structures, triples and quads, and sub-collections. The application closely follows the terminology of the database.

Note: forward-engineering of JSON Schema is available for use by xdmp.jsonValidate. Reverse-engineering of schemas is not currently available; it will be released at a later time.


MongoDB


Data modeling for MongoDB

MongoDB can help you make a difference to the business. Tens of thousands of organizations, from startups to the largest companies and government agencies, choose MongoDB because it lets them build applications that weren’t possible before. With MongoDB, these organizations move faster than they could with relational databases at one tenth of the cost.

Hackolade was specially built to support the data modeling of MongoDB collections, pioneering a new set of software tools to smooth the onboarding of NoSQL technology in corporate IT landscapes, reduce development time, increase application quality and lower execution risks.
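As one illustration of how a collection model can be enforced in MongoDB (collection and field names are hypothetical; this is a sketch, not Hackolade output), a $jsonSchema validator can be attached to a collection with pymongo:

```python
# Hypothetical example: a MongoDB collection with a JSON Schema validator.
from pymongo import MongoClient  # pip install pymongo

client = MongoClient("mongodb://localhost:27017")
db = client["shop"]
db.create_collection("customers", validator={
    "$jsonSchema": {
        "bsonType": "object",
        "required": ["name", "email"],
        "properties": {
            "name":  {"bsonType": "string"},
            "email": {"bsonType": "string"},
            "addresses": {                     # embedded array of sub-documents
                "bsonType": "array",
                "items": {
                    "bsonType": "object",
                    "properties": {"city": {"bsonType": "string"}},
                },
            },
        },
    }
})
```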


Neo4j

Data modeling for graph databases

Neo4j is a graph database management system described as an ACID-compliant transactional database with native graph storage and processing.
Hackolade was specially built to support the data modeling of Neo4j node labels and relationship types. It provides a graph view with familiar circular node labels, as well as an Entity-Relationship Diagram view with permanent display of the attributes (or properties) of both node labels and relationship types.
Hackolade dynamically generates Cypher code as the model is created via the application. It also lets you perform reverse-engineering of existing instances, so you can enrich the model with descriptions and constraints, then produce a complete, clickable HTML documentation for distribution to all application stakeholders.
The application closely follows the terminology of the NoSQL database, pioneering a new set of software tools to smooth the onboarding of NoSQL database technology in corporate IT landscapes, reduce development time, increase application quality, and lower execution risks.
Hackolade is not a graph visualization tool, but a tool for schema design of Neo4j graph databases.
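For illustration (node labels, relationship type, and properties are hypothetical; this is a sketch, not generated output), the kind of Cypher such a model corresponds to can be run through the official neo4j Python driver:

```python
# Hypothetical example: two node labels and one relationship type with properties.
from neo4j import GraphDatabase  # pip install neo4j

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    session.run(
        """
        MERGE (p:Person {name: $name})
        MERGE (m:Movie  {title: $title, released: $released})
        MERGE (p)-[r:ACTED_IN {roles: $roles}]->(m)
        """,
        name="Keanu Reeves", title="The Matrix", released=1999, roles=["Neo"],
    )
driver.close()
```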


JSON Schema

Data modeling for plain JSON and REST APIs

JSON is increasingly dominating the application development world, especially when the target platform is mobile.

Hackolade is a visual editor of JSON Schema draft v4. It supports all the advanced features, including choices and polymorphism. With Hackolade, it is easy to visually create a JSON Schema from scratch, and without prior knowledge of the syntax. You can also easily derive JSON Schema from JSON document files.
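As a small illustration of the choices/polymorphism mentioned above (the schema itself is hypothetical), a draft-04 schema with a oneOf choice can be validated with the jsonschema package:

```python
# Hypothetical example: JSON Schema draft-04 with a polymorphic oneOf choice.
from jsonschema import validate  # pip install jsonschema

payment_schema = {
    "$schema": "http://json-schema.org/draft-04/schema#",
    "type": "object",
    "required": ["amount", "method"],
    "properties": {
        "amount": {"type": "number", "minimum": 0},
        "method": {                          # polymorphic sub-object: card OR iban
            "oneOf": [
                {"type": "object", "required": ["cardNumber"],
                 "properties": {"cardNumber": {"type": "string"}}},
                {"type": "object", "required": ["iban"],
                 "properties": {"iban": {"type": "string"}}},
            ]
        },
    },
}
validate({"amount": 12.5, "method": {"iban": "BE71096123456769"}}, payment_schema)
```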


Apache Avro Schema

Serialize data in Hadoop and stream data with Kafka

Apache Avro is a language-neutral data serialization system, developed by Doug Cutting, the father of Hadoop. Avro is a preferred tool for serializing data in Hadoop, and a strong choice of file format for data streaming with Kafka. Avro serializes data into a compact binary format with a built-in schema, which can be deserialized by any application. Avro schemas are defined in JSON, which facilitates implementation in languages that already have JSON libraries. Avro creates a self-describing file, the Avro Data File, which stores data along with its schema in the metadata section.

Hackolade was specially adapted to support data modeling with Avro schemas. It closely follows the Avro terminology, and dynamically generates the Avro schema for the structure created with a few mouse clicks. Hackolade also easily imports schemas from .avsc or .avro files to represent the corresponding Entity Relationship Diagram and schema structure.
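For illustration (record, field, and file names are hypothetical), here is a minimal Avro schema defined in JSON and used to write a self-describing Avro data file with the fastavro library:

```python
# Hypothetical example: an Avro record schema serialized to an Avro data file.
import fastavro  # pip install fastavro

schema = {
    "type": "record",
    "name": "Order",
    "namespace": "com.example.shop",
    "fields": [
        {"name": "order_id", "type": "string"},
        {"name": "amount",   "type": "double"},
        {"name": "status",   "type": {"type": "enum", "name": "Status",
                                      "symbols": ["NEW", "SHIPPED", "CANCELLED"]}},
        {"name": "comment",  "type": ["null", "string"], "default": None},
    ],
}
parsed = fastavro.parse_schema(schema)
with open("orders.avro", "wb") as fo:   # the schema travels in the file metadata
    fastavro.writer(fo, parsed, [{"order_id": "o-1", "amount": 19.9,
                                  "status": "NEW", "comment": None}])
```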


Swagger & OpenAPI


Visual editor for Swagger & OpenAPI design

Creating APIs is not easy! And writing Swagger documentation in a design-first approach can be tedious at best, generally error-prone and frustrating...

Hackolade takes a visual schema-centric approach so you can focus on the content of requests and responses. The application also assists with all the metadata to produce validated Swagger files and test the transactions.

You can also reverse-engineer existing Swagger files in JSON or YAML to produce a graphical representation of your APIs.
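As an illustration of the schema-centric skeleton behind such a design (the API, path, and schema names are hypothetical), an OpenAPI 3 definition boils down to paths, operations, and reusable component schemas, and can be emitted as YAML:

```python
# Hypothetical example: a minimal OpenAPI 3 definition dumped to YAML.
import yaml  # pip install pyyaml

spec = {
    "openapi": "3.0.3",
    "info": {"title": "Pet store", "version": "1.0.0"},
    "paths": {
        "/pets/{petId}": {
            "get": {
                "parameters": [{"name": "petId", "in": "path", "required": True,
                                "schema": {"type": "string"}}],
                "responses": {"200": {
                    "description": "A single pet",
                    "content": {"application/json": {
                        "schema": {"$ref": "#/components/schemas/Pet"}}},
                }},
            }
        }
    },
    "components": {"schemas": {"Pet": {
        "type": "object",
        "required": ["id", "name"],
        "properties": {"id": {"type": "string"}, "name": {"type": "string"}},
    }}},
}
print(yaml.safe_dump(spec, sort_keys=False))
```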


TinkerPop w/ Gremlin API

Data modeling for graph databases (OLTP) and graph analytics systems (OLAP)

Apache TinkerPop is an open source, vendor-agnostic, graph computing framework distributed under the Apache2 license. When a data system is TinkerPop-enabled, its users are able to model their domain as a graph and analyze that graph using the Gremlin graph traversal language.
Hackolade was specially built to support the data modeling of TinkerPop vertex labels and edge labels. It provides a graph view with familiar circular vertex labels, as well as an Entity-Relationship Diagram view with permanent display of the attributes (or properties) of both vertex labels and edge labels.
Hackolade dynamically generates Gremlin code as the model is created via the application. It also lets you perform reverse-engineering of existing instances, so you can enrich the model with descriptions and constraints, then produce a complete, clickable HTML documentation for distribution to all application stakeholders.
Hackolade is not a graph visualization tool, but a tool for schema design of TinkerPop graph databases.
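For illustration (vertex labels, edge label, and properties are hypothetical; the gremlinpython client and a Gremlin Server endpoint are assumed), creating a couple of vertices and an edge with properties looks like this:

```python
# Hypothetical example: Gremlin traversals adding vertices, an edge, and properties.
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

conn = DriverRemoteConnection("ws://localhost:8182/gremlin", "g")
g = traversal().withRemote(conn)

alice = g.addV("person").property("name", "Alice").next()
acme = g.addV("company").property("name", "Acme").next()
g.addE("works_for").from_(__.V(alice.id)).to(__.V(acme.id)) \
    .property("since", 2020).iterate()

conn.close()
```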


AWS Glue Data Catalog


Visual editor for the Glue Data Catalog table structures

The AWS Glue Data Catalog is a fully managed, Apache Hive 2.x-compatible metadata repository for all the data assets of your Glue ETL jobs, regardless of where they are located. The Data Catalog contains table definitions, job definitions, and other control information to help you manage your AWS Glue environment.

Hackolade was specially adapted to support the data modeling of the AWS Glue Data Catalog, including Glue metadata and Hive primitive and complex datatypes, producing both AWS CLI commands and HQL Create Table syntax. The application closely follows the Hive terminology and storage structure.

You can also reverse-engineer your existing Glue Data Catalog to produce a graphical representation of your assets.
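As an illustration (database, table, column names, and the S3 location are hypothetical, and an existing Glue database is assumed), registering a catalog table with its Hive-style columns, partition key, and storage descriptor can be done with boto3:

```python
# Hypothetical example: registering a table in the AWS Glue Data Catalog.
import boto3  # pip install boto3

glue = boto3.client("glue", region_name="us-east-1")
glue.create_table(
    DatabaseName="analytics",
    TableInput={
        "Name": "page_views",
        "TableType": "EXTERNAL_TABLE",
        "PartitionKeys": [{"Name": "view_date", "Type": "date"}],
        "StorageDescriptor": {
            "Columns": [
                {"Name": "user_id", "Type": "string"},
                {"Name": "referrer", "Type": "string"},
                {"Name": "tags", "Type": "array<string>"},
            ],
            "Location": "s3://my-bucket/page_views/",
            "InputFormat": "org.apache.hadoop.mapred.TextInputFormat",
            "OutputFormat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat",
            "SerdeInfo": {"SerializationLibrary":
                          "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe"},
        },
    },
)
```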


Apache Parquet Schema

Visual schema design to serialize data in columnar format

Apache Parquet is a binary file format that stores data in a columnar fashion for compressed, efficient columnar data representation in the Hadoop ecosystem, and in cloud-based analytics.
Hackolade is a visual Parquet schema editor for non-programmers, specifically adapted to support the schema design of Parquet files. It supports the Parquet structure, data types, logical types, encodings, compression codecs, and all other standard metadata.
Hackolade dynamically generates the Parquet schema as the model is created via the application. It also lets you perform reverse-engineering of files on the local file system, shared directories, AWS S3 buckets, Azure Blob Storage, or Google Cloud Storage.
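For illustration (column names are hypothetical; this is a sketch, not generated output), a Parquet schema with a nested struct and a timestamp logical type can be declared and written with pyarrow:

```python
# Hypothetical example: declaring a Parquet schema and writing a compressed file.
from datetime import datetime

import pyarrow as pa
import pyarrow.parquet as pq  # pip install pyarrow

schema = pa.schema([
    pa.field("order_id", pa.string()),
    pa.field("amount", pa.float64()),
    pa.field("placed_at", pa.timestamp("ms")),            # logical type
    pa.field("customer", pa.struct([pa.field("id", pa.string()),
                                    pa.field("country", pa.string())])),
])
table = pa.table(
    {"order_id": ["o-1"], "amount": [19.9],
     "placed_at": [datetime(2024, 1, 15, 12, 0)],
     "customer": [{"id": "c-1", "country": "BE"}]},
    schema=schema,
)
pq.write_table(table, "orders.parquet", compression="snappy")
```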


ScyllaDB


Lightning fast throughput and ultra-low latency

ScyllaDB is an open-source distributed NoSQL column-oriented data store, designed to be compatible with Apache Cassandra. It supports the same CQL query language but is written in C++ instead of Java to increase raw performance and take full advantage of modern multi-core servers, with self-tuning capabilities.

Hackolade was specially adapted to support the data modeling of ScyllaDB, including User-Defined Types and the concepts of Partitioning and Clustering keys. It lets users define, document, and display Chebotko physical diagrams. The application closely follows the ScyllaDB terminology, data types, and Chebotko notation.

The reverse-engineering function includes the table definitions, indexes, user-defined types and functions, but also the inference of the schema for JSON structures if detected in text or blob.


Snowflake

Cloud-based data warehousing

Snowflake’s architecture is a hybrid of traditional shared-disk database architectures and shared-nothing database architectures. It supports the most common standardized version of SQL: ANSI.

Hackolade was specially adapted to support the data modeling of Snowflake, including schemas, tables and views, indexes and constraints, plus the generation of DDL Create Table syntax as the model is created via the application. In particular, Hackolade has the unique ability to model complex semi-structured objects stored in columns of the VARIANT data type. The reverse-engineering function, if it detects JSON documents, will sample records and infer the schema to supplement the DDL table definitions.
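By way of illustration only (database, schema, table, and column names are hypothetical, and this is not Hackolade output), a table mixing relational columns with a VARIANT column for semi-structured JSON, and a query reaching into that column, look like this:

```python
# Hypothetical example: Snowflake DDL with a VARIANT column, plus a query into it.
create_table_sql = """
CREATE TABLE IF NOT EXISTS sales.public.orders (
    order_id   NUMBER        NOT NULL,
    ordered_at TIMESTAMP_NTZ,
    customer   VARCHAR,
    payload    VARIANT,          -- semi-structured JSON document
    CONSTRAINT pk_orders PRIMARY KEY (order_id)
);
"""
query_sql = """
-- dot/bracket notation reaches into the VARIANT column
SELECT payload:shipping.city::string AS city, COUNT(*) AS orders
FROM sales.public.orders
GROUP BY 1;
"""
print(create_table_sql)
print(query_sql)  # run via the Snowflake web UI, SnowSQL, or a connector
```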


Microsoft SQL Server and Azure SQL Database

SQL Server is aimed at different audiences and at workloads ranging from small single-machine applications to large Internet-facing applications with many concurrent users.

Hackolade has the unique ability to model complex semi-structured objects stored in columns of the (N)VARCHAR(MAX) data type. The reverse-engineering function, if it detects JSON documents, will sample records and infer the schema to supplement the DDL table definitions.

Hackolade was specially adapted to support the data modeling of SQL Server and Azure SQL Database, including schemas, tables and views, indexes and constraints, plus the generation of DDL Create Table syntax.
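By way of illustration only (table and column names are hypothetical, and this is not Hackolade output), the pattern described above combines an NVARCHAR(MAX) column constrained to hold JSON with OPENJSON queries over its content:

```python
# Hypothetical example: T-SQL table with a JSON-constrained NVARCHAR(MAX) column.
create_table_tsql = """
CREATE TABLE dbo.Orders (
    OrderId    INT IDENTITY(1,1) PRIMARY KEY,
    OrderedAt  DATETIME2 NOT NULL,
    Payload    NVARCHAR(MAX) NOT NULL,
    CONSTRAINT CK_Orders_Payload_IsJson CHECK (ISJSON(Payload) = 1)
);
"""
query_tsql = """
SELECT o.OrderId, j.City, j.Total
FROM dbo.Orders AS o
CROSS APPLY OPENJSON(o.Payload)
    WITH (City  NVARCHAR(100)  '$.shipping.city',
          Total DECIMAL(10, 2) '$.total') AS j;
"""
print(create_table_tsql)
print(query_tsql)  # run with sqlcmd, SSMS, or a client library such as pyodbc
```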


EventBridge Schema Registry

Build better event-driven serverless applications

Amazon EventBridge is a serverless event bus that makes it easy to connect applications together using data from your own applications and services. The EventBridge Schema Registry gives you the ability to control and manage the lifecycle of schemas for your microservices and events.
Hackolade leverages the visual schema-centric approach of its OpenAPI 3 plugin to support the creation and maintenance of EventBridge schemas.
Hackolade dynamically generates the schema so it can be applied to your Amazon EventBridge Schema Registry. It also lets you perform reverse-engineering of your registry, and create HTML or PDF documentation.
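As an illustration (registry, schema, and event names are hypothetical; the boto3 "schemas" client and an existing registry are assumed), publishing an OpenAPI 3 event schema to the registry could look like this:

```python
# Hypothetical example: registering an OpenAPI 3 event schema in EventBridge.
import json
import boto3  # pip install boto3

schemas = boto3.client("schemas", region_name="us-east-1")
content = {
    "openapi": "3.0.0",
    "info": {"title": "OrderPlaced", "version": "1.0.0"},
    "paths": {},
    "components": {"schemas": {"OrderPlaced": {
        "type": "object",
        "required": ["orderId", "total"],
        "properties": {"orderId": {"type": "string"},
                       "total": {"type": "number"}},
    }}},
}
schemas.create_schema(
    RegistryName="my-app-events",
    SchemaName="OrderPlaced",
    Type="OpenApi3",
    Content=json.dumps(content),
)
```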


Azure Synapse Analytics


Data modeling for serverless analytics

Azure Synapse Analytics and Parallel Data Warehouse together form a limitless analytics service that brings together enterprise data warehousing and Big Data analytics. It uses either serverless or provisioned resources, with a unified experience to ingest, prepare, manage, and serve data for immediate BI and machine learning needs.

Hackolade has the unique ability to model complex semi-structured objects stored in columns of the (N)VARCHAR(MAX) data type. The reverse-engineering function, if it detects JSON documents, will sample records and infer the schema to supplement the DDL table definitions.
Hackolade was specially adapted to support the data modeling of Azure Synapse Analytics and Parallel Data Warehouse, including schemas, tables and views, plus the generation of DDL Create Table syntax.
