Every few weeks, or so it seems, a new database is added to an ever-growing and increasingly diverse list.
In addition to the familiar relational databases, there are analytical, transactional, in-memory, NoSQL, NewSQL, time series, event streaming, graph, distributed, wide-column, key-value and document stores. Some databases straddle several of these categories, while others are finely tuned to serve a small set of use cases or a specific domain.
The explosion in database technologies, which began about ten years ago with the advent of NoSQL, has been accompanied by a rise in the number of query languages and interfaces, each specific to a certain class of database.
Processing engines are proliferating too, particularly given the popularity of real-time streaming as a way of moving data rapidly in and out of datastores; filtering, querying and aggregating data; and managing low-level operations.
So, developers and data engineers are spoiled for choice, which is not necessarily a good thing.
In short, the at-scale data processing environment has become an archipelago of small islands of functionality; navigating between them is not for the faint-hearted, or at least not for those inexperienced in integration. In an environment characterised by rapid change, it's easy to see how this cornucopia of choice can also be a recipe for complexity.
But when complexity rears its many ugly heads, those dedicated to simplification and unity are not far behind.
From Computing (U.K.)