
Complimentary access to this report is no longer available. Access the latest Gartner® content here.

Has your organization recognized that active metadata is essential for success?

It’s the topic on everybody’s lips right now, and for good reason. Wherever data is used, active metadata makes it more informative, trustworthy, controllable and governable.

It has always been at the heart of everything we do. With Solidatus, your data will be more dynamic, more complete, and your decisions will be made based on context as well as content.

In their latest report, Gartner® gave the quick answer to the much-asked question: what is active metadata?

Gartner® includes an important prediction in the research that highlights just how important active metadata could be for your organization: “By 2026, organizations adopting active metadata practices will increase to 30% across data and analytics to accelerate automation, insight discovery and recommendations.”

Will your organization be part of that 30%?

Gartner, Quick Answer: What Is Active Metadata?, by Melody Chien, Mark Beyer, Thornton Craig, Robert Thanaraj, 29 March 2023 

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.


Latest news


By Tom Khabaza, Principal AI Consultant at Solidatus

Earlier this year, we kicked off a blog series on active metadata. In that first blog post, entitled From data to metadata to active metadata, I provided an overview of what it is and the challenges it addresses. 

To recap, metadata is data about an organization’s data and systems, about its business structures, and about the relationships between them. How can such data be active? It is active when it is used to produce insight or recommendations in an automated fashion. Below I’ll use a very simple example to demonstrate the automation of insight with active metadata.

In this – the second of this ongoing four-part series – we expand on the subject to look at the role of metadata in organizations, and how dynamic visualization and inference across metadata are essential to provide an up-to-date picture of the business. This is active metadata, and it is fundamental to the well-being of organizations.

Let’s start with a look at metadata in the context of an organization.

An organization and its metadata

Imagine a simple commercial organization. It has operational departments for sales, production and other operations, and corresponding systems which record their function. Transactions from each of these systems are copied into an accounting system, and from there into a financial report. The systems and the flow of data between them are depicted in a metadata model below:  

Metadata showing systems and information flow

Data quality is also metadata

Financial reports must be accurate, both because they are used by executives in decision-making and because they are part of the public face of the company, enabling it to attract investment and comply with regulations. The quality of the data in all the systems that feed these reports is therefore monitored closely. Here, a data quality metric (“DQ”), held as a property of each metadata entity, is scored from 1 (very poor quality) to 10 (top quality). In the metadata view below, these DQ values are shown as tags; in this example all the data is of top quality (DQ 10). Assuming that data quality measurement is automated, these values will be refreshed automatically on a regular basis.

Metrics showing top-quality data

Data quality flows through the organization

A data quality metric is calculated from the data in the system to which it applies, but such metrics often assume that the data being fed into the system is correct. This means that upstream data quality problems can have an undetectable impact on downstream data quality, so it’s important for data quality monitoring to take account of data flowing between systems. The metadata view below indicates the flow of good-quality data with a green arrow, and the presence of good-quality data with a green box. Again, this view indicates that top-quality data is present throughout. Unlike the previous view, however, constructing this view requires inference across the metadata: an entity is displayed as green only if it contains top-quality data and all of its incoming connections provide top-quality data. This is a more informative view that will show the propagation of data quality problems when they occur.

Every data flow is top-quality

Spotting local data quality problems

Now suppose that a data quality problem occurs in the operational systems: the data quality metric is reduced from 10 to 9. The data quality metrics view shows the problem clearly but does not show its consequences; local data quality metrics imply that everything is all right downstream from the problem. 

A data quality problem shows up locally

Inferring data quality problems actively

However, the more dynamic view of data quality shows the problem immediately. The active, inferential nature of this metadata allows us to color data flows red if they may be carrying low-quality data, and the systems red if they may hold low-quality data as a result of upstream data quality problems.  

Active metadata inference shows the flow of data quality problems

The downstream transmission of data quality problems is obvious in this view: the financial report cannot be trusted. This dynamic alert is the first step towards fixing the problem, but it goes further than that. An alert system of this kind means that, when no alert appears, the data and reports are trustworthy, an important attribute of data systems both for management decision-making and for regulatory compliance. 
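
To make this inference concrete, here is a minimal sketch in Python. It is not Solidatus code, and the system names, DQ scores and flows are illustrative only; it simply applies the rule described above: an entity is “green” only if its own DQ score is top quality and every upstream entity feeding it is also green.

```python
# A minimal sketch (not Solidatus code) of the inference described above.
# System names, DQ scores and flows are illustrative only.

TOP_QUALITY = 10

# Locally measured data quality per metadata entity.
dq_scores = {
    "Sales": 9,           # the local data quality problem (reduced from 10)
    "Production": 10,
    "Operations": 10,
    "Accounting": 10,
    "Financial Report": 10,
}

# Data flows between systems (upstream -> downstream).
flows = [
    ("Sales", "Accounting"),
    ("Production", "Accounting"),
    ("Operations", "Accounting"),
    ("Accounting", "Financial Report"),
]

def infer_status(dq_scores, flows):
    """An entity is green only if its own DQ is top quality and
    every upstream entity feeding it is also green."""
    upstream = {entity: [] for entity in dq_scores}
    for source, target in flows:
        upstream[target].append(source)

    cache = {}

    def is_green(entity):
        if entity not in cache:
            cache[entity] = (
                dq_scores[entity] == TOP_QUALITY
                and all(is_green(u) for u in upstream[entity])
            )
        return cache[entity]

    return {e: ("green" if is_green(e) else "red") for e in dq_scores}

print(infer_status(dq_scores, flows))
# Sales, Accounting and Financial Report are flagged red: the upstream
# problem propagates downstream, just as in the view above.
```

When the Sales score returns to 10, the same inference turns every entity green again, which is what makes the “no alert means trustworthy” property possible.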

Active metadata, inference and dynamic insight

To sum up, it is important that an organization recognizes the dynamic nature of its metadata and makes it easy to visualize and act on. It’s not enough to hold static information about the qualities of our systems and data; we must also keep it up to date with automatic metadata feeds, continuously infer the consequences of any changes, and make those consequences visible immediately. Here I have illustrated the consequences of this approach for data quality and trust in financial reporting, but the same principles apply in a wide variety of contexts.

Wherever data is used – which today is almost everywhere – active, dynamic, inferential metadata makes it more informative, trustworthy, controllable and, ultimately, governable. Active, inferential, visual metadata is essential for the well-governed organization. 

Quick Answer: What Is Active Metadata?

In the latest Gartner® report, find out what active metadata is, how to use it, and how to get started.


Last month, Lexology reported on the new data standards that will form a part of the National Defense Authorization Act (NDAA).

The act requires the US Treasury Department “to set, and other federal regulators to implement, data reporting standards for the financial entities that they regulate.” It’s just one example of the many new regulatory hurdles – not just in the US but around the world – that data practitioners within banks and other regulated industries are having to navigate.

Against this backdrop, our recent announcement that Solidatus and BigID are partnering to extend data governance will be hugely welcome, as it addresses increasing user demand for a unified solution that exposes, links and visualizes organizational metadata, enabling actionable insights from quality, privacy and security data.

But in this short blog post, we’ll dig a little deeper, asking:

  • Who BigID are;
  • What this partnership means for clients;
  • The key benefits of tapping into this partnership as a user of both services; and
  • What it will enable you to do in your work.

Who are BigID?

BigID are a leader in data security, privacy, compliance and governance, enabling organizations to proactively discover, manage, protect and get more value from their data in a single platform for data visibility and control.

Combining this data platform with the Solidatus metadata governance and lineage capability gives clients full management and control of both their data and their metadata, ensuring the transparency that is critical for successful data management.

BigID helps clients manage the scope and scale of their enterprise data across multiple cloud instances.

What does this partnership mean for clients?

Linking your BigID infrastructure with your Solidatus enterprise lineage modelling allows you to:

  • Enrich your Solidatus models with the advanced metadata within BigID; and
  • Help visualise the complexity of your enterprise data within BigID.

The combination allows you to:

  • Get value from your BigID implementation faster;
  • Get a consistent view across your BigID and legacy environments; and
  • Reduce the risk of migrating legacy systems to BigID.

What are the key benefits of this approach?

Using Solidatus to visualize and understand the structure and scope of metadata within BigID allows firms to gain insight into the source, flow, and use of data across the entire organisation. Key benefits include:

  • Increased knowledge and understanding of data across the organization
  • The ability to apply views and filters tailored to business users, helping to build trust and confidence
  • Faster implementation of BigID in complex environments
  • Additional BigID-sourced metadata that enriches lineage models and increases accuracy and confidence
  • Faster adoption, thanks to data being visually available in an easy-to-understand format for all stakeholders
  • Consistent views across BigID and legacy platforms, supporting shared understanding
  • A visual approach that is valuable for assurance and compliance
  • A valuable change management tool for business changes and migration efforts

What does this enable?

Combining these two platforms allows clients to:

  • Enrich the enterprise blueprints in Solidatus with BigID-sourced data, such as data quality, data classifications and volumes
  • Help visualize the complexity within BigID
  • Help to capture and share knowledge of the enterprise data across the organization
  • Reduce the risk of losing knowledge within the organization if key resources leave or are unavailable
  • Model regulations and corporate policies within Solidatus and link the practical implementation of their obligations to the impacted data
  • Bring all the components of your front-to-back flows into one, unified view

To wrap up, let’s whet your appetite with a view of what some BigID catalog info looks like when mapped into a Solidatus model.


Running a tight data governance and regulatory compliance regime is nigh on impossible without good data lineage and active metadata management software.

In fact, in-depth knowledge of data and how it’s used is crucial to all organizations above a certain size, pretty much regardless of their data- or systems-related area of endeavor. Apart from a regulatory imperative to understand the data flows within your organization, data lineage is fundamental to getting the right data to the right people, at the right time – this is the basis of sound decision-making, risk management and new data-driven initiatives.

So we’re agreed on the value of data lineage. But how do you map your data and systems into lineage models – or end-to-end metadata maps – in the first place? And, how can you derive end-to-end data flows where documentation doesn’t exist or is outdated?

Mapping, fast and slow

Well, one way is to do it manually. This is tried-and-tested and usually the starting point. With user-friendly software and a good UI, it can be done, and manual intervention almost always plays a part in model-building.

But it’s not always very quick.

So what about when you’re mapping the Big Beasts of the data world? Your Snowflakes, Oracles and Google BigQuerys? Your MySQLs, SAPs and Salesforces? The list goes on. These platforms and systems can be so vast and complex that relying solely on manual methods isn’t viable.

This is where automatic connectors come in. But not all connectors are created equal.

Let’s take a look at five of the most important attributes that the best data lineage and metadata connectors bring to your work.

1. Depth and automation

The best connectors are supercharged, providing exceptional depth and automation. Instead of just copying schema into a model and cataloging your current systems, they perform deep, real-time analysis to map the architecture of your systems and the relationships that define them.

To illustrate the value of this attribute, let’s take Google BigQuery by way of example; with the right connector, you have the ability to comprehend intricate modifications to the extracted metadata from this popular data warehouse. Rather than merely updating metadata, it analyzes and visualizes the exact fine-grained change within your lineage model, something that’s critical in a data world that never stops moving.

Philosophically speaking, code is metadata. So a good connector will harvest, parse and analyse data flows within complex SQL code so you can understand exactly what’s happening and how data is moving and mutating around your systems.

2. Detecting and recording change

Top-quality automatic connectors harvest and discover metadata, and they identify and document the exact changes to your metadata by comparing it to a previous version before incorporating it, so you can assess the impact of fine-grained changes on connected systems.

Harvesting deep-level connections ensures that the business-friendly metadata views you need are backed up by sound information under the surface about the systems you map. This provides a richer understanding of the ‘before’, ‘during’ and ‘after’, which is essential for reliable planning and risk mitigation.

To add more context: a connector of this sort that executes every day isn’t just copying the schema into your metadata graph; it’s doing a daily comparison – or ‘diff’ – to help you understand the change and merge it into your graph in a safe and conflict-free way. So not only are you synced with the systems you’re connecting to, you’re detecting exactly when and what is being changed.

Pulling the latest structure and that alone, which is what some providers offer, is a very poor cousin of this functionality.
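
As a simplified illustration of what such a comparison involves (this is a sketch, not any vendor’s connector, and the tables, columns and types below are invented), a daily ‘diff’ between the previous snapshot and the latest one might be computed like this:

```python
# A simplified sketch of change detection: compare yesterday's harvested
# schema snapshot with today's and record exactly what changed, rather
# than overwriting the graph with the latest structure.

previous = {  # hypothetical snapshot harvested yesterday
    "orders":    {"id": "INT64", "amount": "NUMERIC", "region": "STRING"},
    "customers": {"id": "INT64", "name": "STRING"},
}

current = {   # hypothetical snapshot harvested today
    "orders":    {"id": "INT64", "amount": "FLOAT64"},  # type change, column dropped
    "customers": {"id": "INT64", "name": "STRING", "segment": "STRING"},
}

def diff_schemas(previous, current):
    """Return fine-grained changes between two schema snapshots."""
    changes = []
    for table in sorted(previous.keys() | current.keys()):
        old_cols = previous.get(table, {})
        new_cols = current.get(table, {})
        for col in sorted(old_cols.keys() - new_cols.keys()):
            changes.append(("removed", table, col, old_cols[col]))
        for col in sorted(new_cols.keys() - old_cols.keys()):
            changes.append(("added", table, col, new_cols[col]))
        for col in sorted(old_cols.keys() & new_cols.keys()):
            if old_cols[col] != new_cols[col]:
                changes.append(("type_changed", table, col,
                                f"{old_cols[col]} -> {new_cols[col]}"))
    return changes

for change in diff_schemas(previous, current):
    print(change)
# ('added', 'customers', 'segment', 'STRING')
# ('removed', 'orders', 'region', 'STRING')
# ('type_changed', 'orders', 'amount', 'NUMERIC -> FLOAT64')
```

Each change record can then be merged into the lineage graph with a timestamp, so downstream impact can be assessed before the new structure is accepted.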

3. Code analysis

Expanding on our first point, a deep-level connector worth its salt will analyze code, ETLs, BI reports, schemas, catalogs, glossaries, dictionaries, data types, data quality issues and more.

This transports the fruits of data lineage modeling into the world of your colleagues in technical departments, who will derive their own insights from these capabilities, adding to the multifaceted in-house understanding of all of your systems.

After all, as part of holistic commercial organisms, the most effective data analysts don’t work in silos.

4. Automating technical lineage

Perhaps the most important attribute is the automation of technical lineage.

The most effective way to deliver this to a user is by parsing ETL logic and SQL code, and linking data flow and transformations to a standard schema.

This enables simple and quick data-flow capture, modeling, and visualization within and between applications.

Quite simply, there are few better ways of automating data lineage than by capitalizing on this method of parsing.
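
To illustrate the idea only (a deliberately naive sketch; production connectors parse the full SQL and ETL grammar, including joins, CTEs, subqueries and column-level transformations), mapping a single INSERT ... SELECT statement to table-level lineage edges might look like this:

```python
import re

# A deliberately naive sketch: pull source and target tables out of an
# INSERT ... SELECT statement and record them as table-level lineage
# edges. The statement and table names below are invented.

sql = """
INSERT INTO finance.revenue_report
SELECT o.region, SUM(o.amount)
FROM sales.orders o
JOIN sales.customers c ON c.id = o.customer_id
GROUP BY o.region
"""

def extract_lineage(statement):
    """Return (source_table, target_table) edges for a simple INSERT...SELECT."""
    target = re.search(r"INSERT\s+INTO\s+([\w.]+)", statement, re.IGNORECASE)
    sources = re.findall(r"(?:FROM|JOIN)\s+([\w.]+)", statement, re.IGNORECASE)
    if not target:
        return []
    return [(source, target.group(1)) for source in sources]

print(extract_lineage(sql))
# [('sales.orders', 'finance.revenue_report'),
#  ('sales.customers', 'finance.revenue_report')]
```

Linking those edges to a standard schema is what turns raw code into technical lineage that can be captured, modeled and visualized within and between applications.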

5. Unlimited metadata ingestion

But we’ve saved one of the most topical attributes for last: metadata ingestion. Or perhaps we should say ‘active metadata ingestion’, because, really, what other metadata is worth working with?

With the right setup, you can take advantage of unlimited metadata ingestion through ‘plug-and-play’ connectors, an open API and SDK framework, along with a suite of file import templates.

Companies thrive or fail on the strength of their intelligence.

To build a metadata fabric, all information about data and its usage must be coalesced, linked together and speak the same language. Ingesting this into a common format means you can easily query, analyze and present it.

A data analyst’s work is at the heart of their enterprise’s intellectual heft. Active metadata feeds into this at a fundamental level, and the quicker and more accurately you can ingest this information into your models such that it can be easily interrogated by your colleagues, the better.
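
To picture what “speaking the same language” can mean in practice, here is a minimal sketch of coalescing metadata from two differently shaped sources into one common record format. The payload shapes and field names are invented for illustration and are not a Solidatus or connector API.

```python
# A minimal sketch of coalescing metadata from differently shaped sources
# into one common format so it can be linked, queried and presented
# consistently. The payload shapes and field names are invented.

from dataclasses import dataclass

@dataclass
class MetadataAsset:
    source: str            # where the metadata was harvested from
    qualified_name: str    # a name that is unique within that source
    asset_type: str        # e.g. "table", "report"
    description: str

def from_warehouse(record: dict) -> MetadataAsset:
    """Normalize a hypothetical warehouse-catalog record."""
    return MetadataAsset(
        source="warehouse",
        qualified_name=f"{record['schema']}.{record['table']}",
        asset_type="table",
        description=record.get("comment", ""),
    )

def from_bi_tool(record: dict) -> MetadataAsset:
    """Normalize a hypothetical BI-tool export record."""
    return MetadataAsset(
        source="bi",
        qualified_name=record["reportId"],
        asset_type="report",
        description=record.get("title", ""),
    )

assets = [
    from_warehouse({"schema": "sales", "table": "orders", "comment": "Order facts"}),
    from_bi_tool({"reportId": "rpt_revenue_q3", "title": "Quarterly revenue"}),
]

# Once everything is in one format, querying and presenting it is simple.
print([a.qualified_name for a in assets if a.asset_type == "table"])
```

Whatever the source, once a record is in the common format it can be linked into the wider metadata graph and interrogated alongside everything else.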

The Solidatus way

Other lineage software platforms provide connectors, but don’t be fooled into thinking that they all value these attributes as highly as we do at Solidatus – or even deliver them at all. We live by them.

No matter where your metadata sits, we can harvest it.

Whether you’re looking to map data and catalogs, databases, or BI, ERP and CRM tools, we’re the best place to start that journey.

At the time of writing, our connector-count is hovering close to the 60-mark – and rising. Where will it be when you next check our connectors page?


By Tom Khabaza, Principal AI Consultant at Solidatus

For over 30 years, I’ve been helping organizations get more value from their data by using machine learning and data analytics of all kinds. This is about data, yes, but it starts with a business goal, and business knowledge informs every step – so much so that we can define data analysis as the application of knowledge to data. In order to make a useful analysis, the data scientist must use not only knowledge of the data, but also, constantly, knowledge of whatever the data is about – that is, knowledge of the business.

Following the recent announcement that Solidatus was named a Representative Vendor in the Gartner® Market Guide for Active Metadata Management, this is the first of four detailed articles on the emerging subject of active metadata. In this blog post, we provide an overview of what it is and the challenges it addresses.

Data and its discontents

Business knowledge is usually found in the minds of businesspeople, so it’s difficult to record, manage and access, and if the data scientist does find a written record of business knowledge, then it’s usually not too suitable for their use, having been created for a different purpose. Data scientists also make use of knowledge that was generated by machine learning, but this is in a machine-usable form, and often not accessible to human examination – machine learning models are usually opaque: they do what they do, but no-one can say what knowledge was used to do it. In any case, the knowledge in a machine learning model is only a tiny part of what is needed to get value from data.

Metadata has the potential to alleviate these problems – and is starting to do so – by making it easier to record and use both data knowledge and the business knowledge behind it. An application which records the details of the data available to the data scientist, where it can be found, and how it relates to the business, is called a data catalog, and the information it contains is called metadata. This kind of application is invaluable to data scientists, or to anyone who has to use data in a complex organization, and several excellent data catalog applications exist.

Metadata – it’s alive!

It is understood less widely that metadata can form more than just a catalog, more than a document in which to look things up. What else can be done with metadata depends on exactly what information is available about the data and about the business processes to which the data relates. If the metadata includes a record of how the data is used, this can be analyzed to improve the use of data – this is the obvious way in which data scientists would use data to improve data science – and this is sometimes called ‘active metadata’. However, metadata can be active in many different ways. If it describes details of how data flows through systems, it can be used to understand the properties of data related to its origin, and to give the organization confidence in downstream systems and data; this is called ‘data lineage’. If the metadata also includes logical information about the data and the business, this can be used automatically to reason and reach conclusions about both the data and the business. All of these properties, plus the dynamic nature of the resultant visualizations, make metadata active, so that it has a direct impact on business decisions, and not only by enabling data analysts.

Active metadata has a curious property, from the point of view of a data scientist. Much of the insight to be gained from metadata comes from treating it as data about individual entities, rather than data about collections, and this makes it easier to visualize, because no aggregation is required. Consider the following diagram, which shows simplified customer and order source system metadata and the metadata for a customer activity report in a data mart:

Customer and order source system metadata linked by lineage to a customer activity report

The arrows show data lineage: which source data items contribute to which parts of the report. Data lineage is also metadata, and it’s clear that this kind of metadata is useful if we wish to trace the origins of the report for purposes of trust, or to trace the consequences of proposed changes in the source systems. We might call this ‘technical metadata’ because it describes how pieces of technology fit together, but we can also have ‘business metadata’. Consider the following diagram, which also shows business objects and connections:

The same metadata extended with business objects and connections

This business metadata allows us to trace relationships across systems, business objectives and responsible managers, and to spot any mismatch between business and technical responsibilities. Note that in all of this metadata, each entity – each database, each data attribute, each objective and each manager – is treated as separate, and insight is gained from chaining together their unique relationships. This is completely different from traditional data analysis, in which insight is gained from aggregations.
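
To illustrate what “chaining together their unique relationships” means in practice, here is a minimal sketch in Python. The entities, people and relationship names are invented; it simply follows a report back to its source system and forward to the business objective it supports, then checks whether the technical and business owners match.

```python
# A minimal sketch of chaining individual metadata relationships.
# Entities, people and relationship names are invented for illustration.

relationships = [
    ("CustomerActivityReport", "derived_from", "OrdersSystem"),
    ("CustomerActivityReport", "supports", "RetentionObjective"),
    ("OrdersSystem", "managed_by", "Asha"),
    ("RetentionObjective", "owned_by", "Ben"),
]

def follow(entity, relation):
    """Return entities reached from `entity` via `relation`."""
    return [tgt for src, rel, tgt in relationships
            if src == entity and rel == relation]

report = "CustomerActivityReport"
for system in follow(report, "derived_from"):
    for objective in follow(report, "supports"):
        technical_owners = follow(system, "managed_by")
        business_owners = follow(objective, "owned_by")
        if set(technical_owners) != set(business_owners):
            print(f"Responsibility mismatch for {report}: "
                  f"{system} is managed by {technical_owners}, "
                  f"but {objective} is owned by {business_owners}")
```

Note that no aggregation is involved: the insight comes purely from walking individual relationships across technical and business metadata.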

Five business challenges solved using metadata

This ability to combine business and technical metadata is hugely powerful, and can be used to solve many different business and data challenges; here are a few examples:

  • Governance and regulatory compliance: give an organization unified transparency and automated collaboration across its data, processes and legal obligations, enhancing decision-making and enabling trust in a connected environment.
  • Data risk and controls: use an understanding of the flow of data and risk to guide the data lifecycle, retention policies and their privacy implications, in order to reduce both risks and costs.
  • Business integration: get a complete view of the business from both technical and management perspectives, and how these are interrelated, enabling joined-up decision-making and clarifying the consequences of proposed changes.
  • Data sharing: model all of an organization’s data-related regulations and policies and the data to which they apply, in order to test any data sharing requests and speed up the approval process, enabling more effective insight solutions throughout a complex organization.
  • ESG: integrate ESG data into decision-making tools and processes, right-size ESG report automation capabilities to support business growth and evolving regulations, and reduce costs associated with ESG data sourcing, quality assessment, analysis, enrichment and report generation.

All of these are solutions which go beyond localized decision-making; they require a model of the business, its data, the processes that create the data, the controls that monitor it, the policies and standards that guide it, the metrics that measure it and the obligations that regulate it. Furthermore, they require the capability to surface and make visible the results of inference across the business. When metadata provides a rich substrate with integrated inference and dynamic visualization, we call it active metadata.

It’s a whole new world of business improvement and efficiency.

Metadata in action, its business applications, and looking to the future – all yet to come

These are strong claims, and in order to understand and believe them fully, it is necessary to see metadata in action; this will be the topic of the next article in this series. But we need even more than that: we need to understand why metadata can do the things that it does in a business solution. This will be the focus of the third article, which will explain in general terms why metadata offers what it does and how it can be applied to produce new solutions. The fourth and final article will look to the future: how we can do even more with metadata, and do it more easily, by integrating AI. The future is bright.



Last month, Gartner® published its Market Guide for Active Metadata Management*.

We were delighted – but not surprised – to see that Solidatus was named a Representative Vendor in this Market Guide report. To us, active metadata is at the heart of everything we do.

But what is active metadata?

Gartner opinion

In the report, Gartner describes active metadata management as “a set of capabilities across multiple data management markets, led primarily by recent advancements in how metadata can be used for continuous analysis. Data and analytics leaders must consider the market evolution as transformational in all data-enabling technologies”.

In our opinion, this is a great overview, and we’d recommend you read the full report. Highlights include:

  • A strategic assumption that “[t]hrough 2024, organizations that adopt aggressive metadata analysis across their complete data management environment will decrease time to delivery of new data assets to users by as much as 70%”;
  • A market direction, which states that, “[o]verall, the metadata management software market grew at 21.6%, reaching $1.54 billion in U.S. dollars. This is one of the highest growing markets within data management software overall, following the DBMS market growth of 22%, although from a much smaller revenue base”; and
  • A market analysis that states that “[c]ollaborative utilization will require new ways to capture and visualize metadata (driven by data preparation for analytics). Included is the capability of rating, ranking, tagging of data and ability to communicate within the metadata solutions”.

But we think active metadata means slightly different things to different vendors.

In this short blog post, a prelude to a series of more detailed blog posts on this increasingly important subject, we summarize what active metadata means to Solidatus and its growing body of users.

The DNA of Solidatus

It took others to identify and name active metadata. But – as with DNA itself, which existed long before Watson and Crick described its structure in the 1950s – active metadata is, and has always been, in our DNA.

It’s what we do and it’s what underpins our technology, through whichever use case lens you view our data lineage solution.

It starts with metadata itself, which we’d define as a special kind of data that describes business processes, people, data and technology, and the relations between them, bringing context and clarity to the decisions that link them. Traditional examples include data catalogs and business glossaries.

This brings us to active metadata. We believe our definition resonates with Gartner’s: the way we see it, active metadata is the facility to reason about, visualize dynamically and gain continuous insight from information about data, data systems, business entities and business concepts, the relations between them, and stored knowledge about them.

How we make metadata active

So, what makes active metadata active and why is it so different from what went on before? At Solidatus, we’d answer these questions with four points:

  • Active metadata includes logical reasoning;
  • Active metadata offers a very dynamic form of visualization;
  • The information in active metadata is not just about the entities themselves, but about the connections between them; and
  • Active metadata should include stored knowledge. This is subtly different from other metadata, because it sits at a higher level, and offers more general, or more universal, information, such as business definitions.

The consequence of all of these is continuous insight. It’s more dynamic, it’s more complete, it’s based on context as well as content, and it respects standards.

It’s a whole different ballgame.

We’ll expand on these in future blog posts, but anyone familiar with Solidatus will immediately appreciate how we sit right in the centre of this space.

The wider context

We’ll finish by contextualizing active metadata, at least as we see it, in terms of what’s delivered, the attributes of an active metadata solution, and – crucially – the main areas for which it can be used.

What’s delivered
An active metadata solution:

  • Is embedded within an organization’s data and business practices;
  • Presents a continuous, coordinated, enterprise-wide capability; and
  • Provides monitoring, insight, alerts, recommendations and design.

Solution attributes

An active metadata analytics workflow:

  • Is integrated, managed and collaborative; and
  • Orchestrates inter-platform metadata assets and cross-platform data asset management.

What it can be used for

Active metadata assets are used to create insight solutions which, among other things, enhance:

  • Data integration;
  • Resource management;
  • Data quality management;
  • Data governance;
  • Corporate governance;
  • Regulatory control;
  • Risk management;
  • Digital transformation; and
  • ESG.

Above all, the benefits of good metadata capabilities boil down to: making business information complete, coherent, informed and logical; delivering faster, richer and deeper insight; keeping everything up to date; and making your processes reliable and responsive.

Watch this space for our detailed follow-up blog posts and, as ever, we encourage readers to request a demo of Solidatus.

To read more on this subject, see What is active metadata? and Mining value from active metadata.

*Gartner, “Market Guide for Active Metadata Management”, November 14, 2022.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s Research & Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.



Solidatus Co-CEO Philip Dutton recently caught up with Baz Khuti, President of Modak US, about the difference between data and metadata, data lineage, and its role in the cloud space.

First up in the discussion was the somewhat grey area of the difference between data and metadata. A fan of analogies, Philip drew the perfect comparison:

“The Library of Congress has 164 million items. How do you find the information you’re looking for? You do that by going to the index which points you to the right area. That is the metadata – describing where it is. And what it is. It gives us more classification and categorisation to help us determine if this is the information we are looking for. Metadata gives the ability to find and understand the data you are looking for. If we move from a library to the internet, we can see that in this instance the data is the internet. We have trillions of webpages. How do you find what you are looking for? You go to a search engine which has indexed the metadata. That really is the crux of what metadata is – it’s the description of the data we are looking for.”

Philip and Baz went on to chat about important key points regarding lineage and cloud space:

  • The role data lineage plays in metadata – it is all about connectivity. Lineage is the hyperlink between documents, the connection between data sources
  • The opportunities and challenges of moving to the cloud – it generates infinite possibilities, but it’s important to not replicate mistakes from the past
  • Cloud gives us greenfield opportunities to build better by default and by design – but we must first understand our data before it is moved

Watch their interview for CDO Magazine on-demand now to learn more and look out for the next segment on regulations and the responsibilities organizations bear when it comes to compliance.


“Metadata is data about data. Metadata gives the specifics of what data is and how you should use it.”

Solidatus Chief Innovation Officer Philip Miller caught up with the A-Team alongside panellists Michael King from BNY Mellon, Edgar Zalite from Deutsche Bank, Tara Raafat from Bloomberg and Mark Etherington from Crux to discuss the evolution of metadata and the benefits of a successful metadata management project.

“It comes down to lineage,” comments Philip Miller. “Am I getting the right data into the right field and using it for the right purpose?” Without understanding lineage, businesses can get into a lot of trouble, and metadata is the key to understanding lineage – what it can deliver and how to get the best results possible.

But what else did we take away from this webinar? Here are our key learnings: 

  • When it comes to best practice, understand you have a challenge. Get the right information out of the heads of your SMEs and into a repository
  • In order to move towards automated metadata management, organisations need to change work practices now – planning projects properly, by design and by default – to be better off in the long term
  • Catalog – businesses need to understand what needs to be understood. Part of that is cataloging data, profiling it and defining taxonomies that fit
  • Technologies need to be accessible – enterprises need to be able to grasp technologies and take ownership of them in order for the right people to have their fingerprints on metadata

To find out more, watch the webinar ‘Best practice for metadata management’ on-demand now:


Solidatus Co-Founder Philip Dutton recently caught up with the team at Tech Company News to discuss the origins of Solidatus, how our solution is transforming businesses on a global scale and where the future lies for us as a business. 

“Our ambition is not only engineering a very necessary piece of software, but also to build a company that is collaborative, inclusive and forward thinking – a tech company of the future in terms of our ethos as well as our product.”

In this profile of Solidatus with Tech Company News, Philip Dutton explores how our lineage-first data management software allows organisations’ data to be efficiently managed through visualisation, as we play a crucial role in enabling the world’s largest data-rich and regulated organisations to effectively manage their data, people and processes.

With our eyes now fixed on global expansion, Philip also gives an insight into what our current focus is as a business – from opening offices in the US to rolling out our industry-first ESG data lineage methodology in response to the growing push for companies worldwide to become more environmentally and socially responsible. 


There is an ever-increasing number of data-related projects, making it easy to slide into common traps and make mistakes that undermine the business behind them. On this panel at Future Processing’s ITInsights event, we discuss the most common data challenges we see in our industry and how to overcome them.

With so many organisations still not reaching their full data potential despite trying to bring data to the centre of their business, we have to look at what is hindering this journey – and data complexity is the biggest of these challenges. Data and technology have evolved exponentially over the last two years, with a huge volume of new systems adding to this complexity. It has become an ecosystem that we must learn to understand and interconnect.

What solutions like Solidatus are trying to do is help banks understand their data. If businesses want to be able to drive a model forward or achieve regulatory compliance, they have to understand their data. By understanding your data and establishing it as a reusable asset, you can drive your ability to obtain market and customer insights. Once understood, you can control your data and gain value from it.

Together with Julien Cousineau, CTO & Founder of Flinks, and Enzo Casasola, Head of RegTech at Suade Labs, Solidatus Co-Founder Philip Dutton goes through the most common pitfalls to avoid, the most relevant lessons learned and tips on how to tackle these challenges.

Check out the FinTalks: After Hours discussion panel – Most Common Data Mistakes to learn more: