silwoodtechnology

Blogs and comments about metadata, SAP, Oracle and Information Management

Ever wondered how you could create an Oracle eBusiness Suite General Ledger Data Model?

Posted on July 1, 2015 by silwoodblog

This is the latest in an occasional series of data models giving some insight into those common business areas in SAP and Oracle applications for which data models are often required for Information Management projects.

It’s a group of tables, in the form of a data model from an Oracle eBusiness Suite system, showing a set of General Ledger tables and their relationships.

Example of Oracle eBusiness Suite General Ledger data model

You can download it here

The model was created following some analysis in Safyr and then exported.

I’m posting it as an image built using Safyr’s own redesigned ERDiagrammer tool which comes with the latest and greatest version of Safyr – 6.3.

I can easily make the same set of tables available in ERwin, ER/Studio or PowerDesigner formats – please feel free to ask if you would like to see it in one of those.  There is no obligation and I’d be happy to provide it.

The usual problem when creating any model (apart from making it look nicely laid out!) is deciding where to stop.  How many tables should I include?  There are literally hundreds of ‘General Ledger’ tables in EBS, so the trick is deciding which are relevant to the job in hand.
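To make the ‘where to stop’ problem concrete, here is a small Python sketch of a keyword search over table descriptions.  The handful of GL tables listed is a tiny illustrative sample, not the real EBS dictionary, and the descriptions are paraphrased:

```python
# Hypothetical mini-catalogue illustrating the scoping problem: a keyword
# search over table descriptions returns more candidates than any one
# model needs, so the modeller must decide which are relevant.
EBS_GL_TABLES = {
    "GL_JE_HEADERS": "Journal entry headers",
    "GL_JE_LINES": "Journal entry lines",
    "GL_JE_BATCHES": "Journal entry batches",
    "GL_CODE_COMBINATIONS": "Account code combinations",
    "GL_BALANCES": "Actual, budget and encumbrance balances",
    "GL_PERIODS": "Accounting calendar periods",
    "GL_LEDGERS": "Ledger definitions",
    "GL_IMPORT_REFERENCES": "Journal import references",
}

def search_tables(catalogue, keyword):
    """Return tables whose name or description mentions the keyword."""
    keyword = keyword.lower()
    return sorted(
        name for name, desc in catalogue.items()
        if keyword in name.lower() or keyword in desc.lower()
    )

# A broad term matches many tables; a narrower one trims the candidate set.
print(search_tables(EBS_GL_TABLES, "journal"))
print(search_tables(EBS_GL_TABLES, "balance"))
```

In a real system the catalogue has hundreds of GL tables, which is exactly why the search-then-scope step matters.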

This is a particular view of some of the tables which are part of the eBusiness Suite General Ledger data model.  I could have made it larger or used different tables, perhaps from a different start point or search term to meet a different business need.  The result is a static model reflecting what I was searching for at the time.

If I needed to find an alternative group of tables I would have been able to achieve that quickly and easily using Safyr and it would have picked up any customisations to the system.

That’s what Safyr is all about – enabling you to scope the set of tables that are relevant to your project – and of course you and your team are the best people to decide that.  Safyr gives you the power to work it out yourself without needing to involve specialists or external consultants.

As we are providing something which is usable in some contexts… here is the legal bit. In no event will Silwood Technology Ltd be liable for any indirect, special or consequential loss arising out of or in any way relating to the use or performance or the use or misuse of the information presented in this model.

You can find other example models from other ERPs in earlier Blog posts.

Please let me know what you think.

Nick Porter
Technology Director

Posted in Data, Data Management, Data Model, Enterprise application, Enterprise Information Management, Enterprise Metadata Management, Metadata, Metadata discovery, Oracle, Safyr, Uncategorized | Tagged Data Discovery, data modeler, data models, General Ledger, metadata, Oracle eBusiness Suite | Leave a comment

SAP “Run Simple” is great for the future, but what about the complexity of the past?

Posted on May 26, 2015 by silwoodblog

This is an interesting blog by Steve Lucas, President of Platform Solutions at SAP.

https://blogs.saphana.com/2014/11/12/a-roadmap-for-simple/

The first paragraph reads:

“Our CEO, Bill McDermott, has thrown down the gauntlet on enterprise complexity so to speak and issued a challenge to every single SAP employee: Make our valuable products and solutions simple to understand, simple to discover, simple to deploy, simple to use and simple to support. Do that, he promises, and our customers will be successful. There is no more noble pursuit for SAP than that as far as I am concerned!”

In a way this 2014 blog heralded the announcement in February 2015 of the launch of S/4HANA, a simplified platform which will ultimately include all the business functionality and industry models currently supported by SAP Business Suite, running on HANA either on premise or in the cloud.  The application will run much faster, with simpler processes, easier customisation and a much smaller data model.

S/4HANA – a new start

Of course not everyone will be able to benefit from this immediately, not least because the first application available on S/4HANA is Financials, with Logistics to follow later in 2015.

Until they do migrate, the 25,000 or so customers who run their businesses on SAP’s ERP and CRM platform (SAP Business Suite) will have to continue to cope with the intricacy of their applications, in terms of both process and data.

Some of those, of course, will be organisations which wish to remain on their current database platform from Oracle, IBM or Microsoft, at least for the foreseeable future.  For them the complexity will remain, and they are supported for at least the next 10 years.

I guess as well that some SAP customers may reach the conclusion that, before deciding to continue to commit to the SAP platform and the process of moving to S4/HANA, it would be a good time to evaluate alternatives, such as Salesforce and Workday, especially if a move to SaaS and Cloud generally is perceived to be beneficial.

As I understand it, the migration to S/4HANA is a two-phase process: first the database itself is migrated to HANA, and then the S/4HANA application is installed.  Alternatively, one could migrate SAP Business Suite to HANA and then upgrade the Financials application to S/4HANA, although I am not sure how that works with the other modules (CRM, SRM, etc.).

For organisations running multiple instances of SAP who may be considering the move to S/4HANA as a route to application rationalisation, there is the question of whether it is necessary to consolidate those applications and their data on their standard RDBMS platform first, then migrate to HANA, and finally install the application.  That can be a significant challenge in itself.

Given that most organisations do not only have SAP as a supplier, there are other aspects of such a major project to consider, for example, data and application integration, master data, enterprise data warehouse as well as governance and compliance.  All of these are heavily skewed to having a reliance on an understanding of the SAP data landscape.

It is well known that the data model underpinning an SAP instance runs to over 90,000 tables before customisation, and that this is further complicated by pool, cluster and transparent table types.

Finding your way around the complex web of data and processes in SAP systems can be daunting

Add to that the fact that the logical names, descriptions and relationships between tables are not held in the physical database system catalogue, and it becomes obvious why so much time and money is spent merely navigating the data model to find what is needed for an Information Management project (Data Warehouse, Data Migration, Integration, Master Data, Governance, etc.).
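For the curious: SAP does hold this business metadata, just in its own dictionary tables (DD02L for table definitions, DD02T for their language-dependent short texts) rather than in the database catalogue.  A small Python sketch of the lookup, with a handful of illustrative rows:

```python
# Sketch of the lookup SAP itself performs: physical table names (as held
# in DD02L) joined to their language-dependent short texts (as held in
# DD02T). These three sample rows are real SAP tables, but a live system
# holds 90,000+ such entries.
DD02L = ["KNA1", "BKPF", "MARA"]                      # physical table names
DD02T = {                                             # (table, language) -> text
    ("KNA1", "E"): "General Data in Customer Master",
    ("BKPF", "E"): "Accounting Document Header",
    ("MARA", "E"): "General Material Data",
}

def describe(table, lang="E"):
    """Resolve a cryptic physical name to its business description."""
    return DD02T.get((table, lang), "(no text maintained)")

# The database catalogue alone would only show the name "KNA1".
print(describe("KNA1"))
```

The point is that none of this is visible to a generic database reverse-engineering tool, which sees only the physical names on the left.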

Whilst it is possible to make the links between, say, SAP transactions or programs and data tables, it is extremely difficult to link data to process, which means even more time and expense is required to understand the impact of changes or upgrades.

So how can those companies who, for whatever reason, continue to use the traditional method for storing data and the SAP R/3 based application packages look to get more out of their investment in these critical systems of record?  Well, there are a number of SAP partners who provide solutions which can assist with this objective.

Naturally we at Silwood Technology would suggest looking at product and solution vendors who specialise in making SAP easier to work with.  On the data/metadata side, our product Safyr for SAP can help – not to simplify the data model itself – but to make it navigable and searchable to enable effective and rapid scoping.  This accelerates the delivery of crucial Information Management projects and increases IT staff productivity and agility.

For application and process based information about SAP, IntelliCorp has a comprehensive set of tools which aid smart Application Lifecycle Management, provide visibility into Business Processes and enhance Data Integration.

For more information about how Silwood Technology can deliver a greater level of understanding of SAP data by providing visibility into the complex web of SAP tables please visit our website.

Posted in Business Process, Data, Data Management, Data Model, Enterprise application, Enterprise Information Management, Enterprise Metadata Management, Metadata, Metadata discovery, S4/HANA, Safyr, SAP, SAP BW, SAP HANA, Uncategorized | Tagged Data Discovery, data model, metadata, Safyr, SAP HANA | Leave a comment

Why do Salesforce applications need Safyr to help the understanding of their metadata?

Posted on May 11, 2015 by silwoodblog

We’ve been surprised (pleasantly I might add) at the response we have received to the introduction of Safyr’s capability to reverse engineer metadata from Salesforce systems.

Finding the right metadata can be like finding a needle in a haystack

It is rapidly becoming our second most popular ‘package’, SAP of course being number one.

Why should this be?   All the other environments we support (SAP, JD Edwards, Oracle eBusiness Suite etc….) have thousands of tables and tens of thousands of attributes, whilst a standard Salesforce system is around 250 tables.   That’s less than 0.3% of the number of tables in a typical SAP system!

In addition, Salesforce has its own data modelling capability (called Schema Builder), allowing the developer to choose which tables to show in a diagram.   As with most things in Salesforce it’s easy enough to use.  Given that a lot of Safyr customers use Safyr in conjunction with a modelling tool like ERwin, ER/Studio or PowerDesigner, the presence of a data modelling capability in the Salesforce development environment would seem to negate the need for Safyr especially given the relatively low number of tables.

And yet we are selling Safyr to Salesforce customers and have a growing level of interest in that version of our product.

So, where is the problem with accessing and understanding Salesforce metadata?

Well, firstly, what we’ve learned from customers and partners is that it’s not quite as ‘black and white’ as it first appears, and a number of factors come into play.

Numbers of Tables

Yes, it is true that a standard Salesforce complement of tables is around the 250 mark.  However, as customers look to the product to fulfil more of their business requirements, it is becoming normal to extend the standard set of tables by adding their own and/or buying third-party packages (such as Rootstock and FinancialForce) built on the Force.com development environment.

The impact of this is to dramatically push up the numbers and we’ve seen customers who have systems that are around the 1,000 table mark.

Now that’s still small by SAP (or even Siebel) standards, but imagine having a data model of 1,000 tables – that is moving into a world where much more rigorous understanding and control of the data design is essential to the effective delivery of information management projects.

Data Model

As mentioned earlier, the Salesforce Schema Builder can be used to model tables in the Salesforce environment.  One of its limitations is that not every table can be selected, just a subset.  Things like Opportunity and Contact can be selected and represented in a diagram, but many of the more minor tables simply don’t appear in the list of available tables.

A case in point is the intersection table that links Opportunity and Contact.  It’s called ‘Opportunity Contact Role’ and, as its name implies, it links the two tables together, with the primary keys of the two ‘parent’ tables appearing as foreign keys on the intersection table (see figure below).

Safyr ERDiagrammer model of the Opportunity Contact Role intersection table

And whilst the Schema Builder is very easy to use, the diagrams cannot be saved.   There is also no concept of grouping tables into areas of interest (a ‘Subject Area’ in Safyr terminology) which would allow you to save different representations of key tables in manageable groups.
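The foreign-key pattern behind ‘Opportunity Contact Role’ can be sketched in a few lines of Python.  The field names follow the Salesforce API object (OpportunityContactRole, with OpportunityId and ContactId), but the sample records are invented for illustration:

```python
from dataclasses import dataclass

# Minimal sketch of the intersection pattern: OpportunityContactRole
# carries foreign keys to both parents. Field names follow the Salesforce
# API; the sample data below is invented.
@dataclass
class Opportunity:
    Id: str
    Name: str

@dataclass
class Contact:
    Id: str
    LastName: str

@dataclass
class OpportunityContactRole:
    Id: str
    OpportunityId: str   # foreign key to Opportunity
    ContactId: str       # foreign key to Contact
    Role: str

opps = {"006A": Opportunity("006A", "Big Renewal")}
contacts = {"003A": Contact("003A", "Smith")}
roles = [OpportunityContactRole("00KA", "006A", "003A", "Decision Maker")]

def contacts_for_opportunity(opp_id):
    """Follow the intersection table from an Opportunity to its Contacts."""
    return [(contacts[r.ContactId].LastName, r.Role)
            for r in roles if r.OpportunityId == opp_id]

print(contacts_for_opportunity("006A"))
```

It is precisely this kind of linking table that a modelling environment needs to surface, which is why its absence from the Schema Builder list matters.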

Numbers of Salesforce Instances

So far we have more tables than we might have initially thought, a less than ideal modelling environment and on top of that, many customers have multiple Salesforce implementations.

We already have customers that are multi-national organisations with a different Salesforce system in each country (or region).  Imagine 10 different Salesforce and Force.com applications, each of around 1,000 tables, and all different.

This is beginning to look like a significant data management headache.

Salesforce has become a critical enterprise application

Perhaps the most striking feature which results in customers seeking out a solution to the growing data complexity of Salesforce and Force applications is the fact that these systems are now mission critical for many companies, large and small.

This means that organisations also need to maximise their investment in them by ensuring that they become a fundamental part of their IT and data ecosystem – by sharing data through integration to streamline processes and reduce cost, by providing data to data warehouses for analytics and business intelligence applications to improve reporting and decision making, or to meet data governance obligations.

If this is in fact the case, then having a solution which helps to meet the challenge of understanding the metadata in these applications will be vital.

More information about Safyr for Salesforce is available at our website

Posted in Data, Data and Application integration, Data Discovery, Data Governance, Data Management, Data Model, Enterprise Information Management, JD Edwards, Metadata, Oracle, PeopleSoft, Safyr, Salesforce, SAP | Tagged accurate data, Business Intelligence, business process, Data Discovery, data model, Data Profiling, Data Warehouse | Leave a comment

Using Safyr ‘ETL for Metadata’ to harvest Microsoft Dynamics metadata

Posted on March 24, 2015 by silwoodblog

As many of you will know, using Safyr we can pull metadata from a range of ERP and CRM products, including SAP, Salesforce and PeopleSoft.

Customers often ask about other packaged applications that they have implemented.  “Can you also do XYZ product?”, and our answer used to be “No – we can only do the fixed set of products we address”.

That all changed a little while back when we added what we call ‘ETL for Metadata’ to Safyr.  This is the capability to map the metadata in the required package into the Safyr repository structure.  It requires a little scripting work, which the new feature supports.

Obviously there needs to be ‘useful’ metadata in the package for us to add value (in other words, we cannot make something from nothing).  In particular, the package really needs a definition of relationships somewhere in its metadata layer, because that is one of the most useful navigation methods.
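As a rough illustration of the idea (not Safyr’s actual implementation – the repository shape and all names here are invented), mapping a package’s own dictionary into a common structure might look like:

```python
# Illustrative sketch only: 'ETL for Metadata' takes whatever form a
# package's data dictionary happens to have and normalises it into one
# common repository shape. All object and field names are invented.
source_dictionary = [
    # e.g. rows harvested from a package's own metadata tables
    {"obj": "account", "label": "Account", "fields": ["accountid", "name"]},
    {"obj": "contact", "label": "Contact", "fields": ["contactid", "fullname"]},
]

def to_repository(rows):
    """Normalise package-specific metadata into (table, column) records."""
    repo = []
    for row in rows:
        for field in row["fields"]:
            repo.append({"table": row["obj"].upper(),
                         "table_label": row["label"],
                         "column": field.upper()})
    return repo

for record in to_repository(source_dictionary):
    print(record)
```

Once a mapping like this is scripted for a package, it can be re-run against any instance of that package, which is the property described in the next paragraph.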

Tables from Microsoft Dynamics harvested by Safyr, using ETL for Metadata technology and techniques

We’ve used this capability to address Microsoft Dynamics CRM.  The picture shows the result: a model of a group of Dynamics tables, represented in the Safyr ERDiagrammer.  Building the ETL for Metadata mapping for Dynamics took about 5 days.

The great thing about the approach we have taken in building the ETL for Metadata capability is that once the mapping is done, it will work with any package of that type, so in this case any Microsoft Dynamics CRM system.  It also retains one of the main advantages of Safyr which is to retrieve customisations to the application’s standard data model.

I can provide the model in other formats (ERwin, ER/Studio, PowerDesigner etc.) if required – just let me know by commenting on this post.

If you are going to Enterprise Data World in Washington DC next week (29th March to 3rd April 2015), please come and find us and ask to see how Safyr harvests metadata from this and other packages.

There is more information about Silwood Technology and Safyr here

Nick Porter
Technical Director
Silwood Technology

Posted in Data, Data Model, Enterprise application, Enterprise Information Management, Enterprise Metadata Management, Metadata discovery, Microsoft Dynamics, Uncategorized | Tagged applications, Data Discovery, data model, Enterprise Information Management, metadata, Microsoft Dynamics, Safyr | 2 Comments

Modelling Salesforce application tables with Safyr and Microsoft Visio – a worked example

Posted on February 24, 2015 by silwoodblog

This is the latest in an occasional series of data models based upon the packaged applications we support with our Safyr product.

Many of our customers and partners are increasingly using Salesforce as an enterprise system of record and of course with the addition of customisations and applications the data model is increasing in size and complexity, meaning that it is becoming more difficult to manage, maintain and understand.

Main Salesforce application tables displayed as Visio model

What I’ve produced in Safyr this time is a model of the key Salesforce tables, exported for use in Microsoft Visio.  If you have Visio you can click here to see the model: Microsoft Visio model of key Salesforce tables (Account, Contact, Opportunity….)

The Salesforce Schema Builder is a great feature, but not wonderful for larger models, so this shows an alternative approach.   The model could have been created just as easily in any of the export formats we support (ER/Studio, ERwin, PowerDesigner….) but most people have Visio so it’s an easy option for this kind of exercise.

Note that you will need Visio 2010 or earlier as Microsoft removed some very useful functionality to do with Data Models in Visio 2013.    So with this in mind, and for those of you who do not have Visio, the model is also available for download here as a jpeg.  If you would like to see the same data in other modelling formats (for ERwin, ER/Studio, PowerDesigner etc.,) please let me know.

As we are providing something which is usable in some contexts… here is the legal bit. In no event will Silwood Technology Ltd be liable for any indirect, special or consequential loss arising out of or in any way relating to the use or performance or the use or misuse of the information presented in this model.

The model was produced in seconds using Safyr, but then one could spend days making the layout look good!   So no apologies if it’s not ‘aesthetically pleasing’.  You can find more information about Safyr for Salesforce here

I hope the model is useful.  Please let me know what you think.

Nick Porter
Technical Director

Posted in Data, Data and Application integration, Data Discovery, Data Model, Enterprise Metadata Management, Metadata discovery, Microsoft Visio, Safyr, Salesforce | Tagged Cloud integration, data, Data Discovery, data model, Enterprise Information Management, metadata, Salesforce, visio | Leave a comment

“So what your product, Safyr, does is Data Discovery – Right?”

Posted on December 18, 2014 by silwoodblog

Well…er…yes, and no.  We get this comment a lot, and it is sometimes hard to explain the difference.  Strictly speaking, we do ‘metadata discovery’.

I like analogies.  Imagine I’ve moved to a new town and I want to know the location of the nearest Supermarket, Post Office, ATM, etc.   What do I do?

I can ask people who know, I can walk or drive around and see where things are, or I can look at Google Maps. (Note: other digital map providers are also available 🙂)

Walking or driving around hoping to find what you are looking for can be time-consuming and expensive

Each has its advantages and disadvantages.  Asking people can often get you the quickest answer, but of course they have to be on hand, and you may have to filter their advice (“what’s the best restaurant around here?” might not give you a helpful answer if you’re on a budget or do not like their favourite type of food).

Walking or driving around will eventually build the whole picture in your head; you might find what you are looking for, and you will see other things on the way – but it can be time-consuming (and expensive).

Google Maps lets you work out where things are ‘virtually’.  You can find things at a dramatically quicker rate than actually driving around and it will provide directions.  It’s no replacement for the real thing, but if the area to be explored is large, then it’s going to save a lot of time.  And of course if the area was very large, say a whole country, driving around would be impractical.

The analogy doesn’t fully work, but data discovery is similar.  Imagine we are trying to do some Data Discovery on applications such as SAP Business Suite, or PeopleSoft, or Oracle eBusiness Suite…  First step might be to ask someone who knows.  “Where can I find the Purchasing Document information in the ERP?”   And if you ask the right person, they may know.  But you have to keep asking for each new enquiry.  You might also try to find the documentation if it exists or you could use ‘informed guesswork’ and trial and error.

Before I go too much further, perhaps I should mention that in Information Technology parlance there are at least three uses for the phrase Data Discovery.

Analytics and Profiling are two types of Data Discovery

Firstly, there is Data Discovery for Analytics, whereby someone makes use of tools such as Tableau, Qlik, Panorama and a whole host of others to try to turn raw data from one or more sources into insight and actionable information for solving business problems.

Secondly, there is Data Discovery for Quality assessment and management.  This typically involves using tools such as Trillium, DQ Global and others to scan the actual data in the database, with the objective of working out how it hangs together by looking for unique identifiers, duplicates, missing mandatory fields, potential foreign keys, etc.

Although they are targeted at different uses and business challenges, both of these types of Data Discovery can give insight and understanding where none existed before.  In fact it may be that one might use a Data Discovery for Quality tool prior to loading data into a Data Discovery for Analytics product so as to avoid decisions being made from inaccurate data.

Many of these products are highly effective and proficient and produce high quality results.

Sometimes, however, using them with enterprise systems such as those from SAP, Oracle and, increasingly, Salesforce is simply not practicable because of the sheer number of database tables and the complexity of the data models underpinning those applications – models which are also hidden from casual viewing.

So, in much the same way that it is simply not practical to drive round every road in a large city looking for a particular store, there needs to be a better, more effective way of finding what you need for the Quality and Analytics Data Discovery tools to work from.  As an example, and as we’ve explained on this blog before, SAP has 90,000+ tables and a very complex, mostly obscure data model.  It is therefore not realistic to profile or analyse an entire system with terabytes or petabytes of data spread over so many ‘buckets’, and it is very difficult and time-consuming to find what you are looking for.  And that’s where Data Discovery for Application Metadata (or perhaps ‘Source Data Discovery’), which is what we do, comes in.

Just as a digital map helps you find what you are looking for, so Metadata discovery radically accelerates the process of locating the tables in your Enterprise packaged applications

Metadata Discovery allows the user to ‘scope’ the likely set of tables and relationships that are required.  It’s similar to data discovery, but it works on the metadata rather than the data.  And whereas Data Discovery for Quality is about the actual rows and columns in the database, and Data Discovery for Analytics is about the content of those tables, rows and columns, Metadata Discovery is about providing the context of that data – “Where is the Customer Master data stored?” and “Which tables are used by the Customer Payments function?” – so that those other Discovery solutions can be more effective on large, complex and customised applications.
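A toy sketch of what metadata-level scoping means in practice: walk the declared relationships outward from a seed table, instead of profiling the data itself.  All table names below are invented:

```python
from collections import deque

# Sketch of metadata-level scoping: starting from one 'seed' table, walk
# declared relationships to collect just the neighbourhood a project needs,
# rather than scanning the whole system's data. Table names are invented.
RELATIONSHIPS = {                 # table -> directly related tables
    "CUSTOMER": {"ADDRESS", "PAYMENT"},
    "PAYMENT": {"CUSTOMER", "INVOICE"},
    "INVOICE": {"PAYMENT", "PRODUCT"},
    "PRODUCT": {"INVOICE"},
    "EMPLOYEE": {"DEPARTMENT"},   # unrelated area, never visited
    "DEPARTMENT": {"EMPLOYEE"},
}

def scope(seed, hops):
    """Breadth-first walk up to `hops` relationship steps from the seed."""
    seen, frontier = {seed}, deque([(seed, 0)])
    while frontier:
        table, depth = frontier.popleft()
        if depth == hops:
            continue
        for related in RELATIONSHIPS.get(table, ()):
            if related not in seen:
                seen.add(related)
                frontier.append((related, depth + 1))
    return sorted(seen)

print(scope("CUSTOMER", 1))
print(scope("CUSTOMER", 2))
```

Widening the hop count grows the scope in a controlled way, which is far cheaper than profiling every table to discover the same neighbourhood.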

So what we do is like ‘Google Maps’ for ERP (meta)data.  It provides a practical mechanism for working out where things are stored in the ERP from the comfort of your own PC.  Take a look at our website for more information

Nick Porter
Technical Director
Silwood Technology Limited

Find me on LinkedIn

Posted in Data and Application integration, Data Discovery, Data Model, Enterprise application, Enterprise Metadata Management, JD Edwards, Metadata discovery, Oracle, PeopleSoft, Salesforce, SAP, Siebel, Uncategorized | Tagged Analytics, Data Discovery, data model, Data Profiling, data quality, metadata | Leave a comment

The role of application metadata in a Master Data implementation

Posted on December 10, 2014 by silwoodblog

We would like to congratulate IBM, Informatica, SAP and Oracle on their continued strong showing in the latest Gartner Magic Quadrant for Master Data Management (MDM) of Customer Data Solutions.

I am not going to spend time restating the benefits that organisations can reap from a properly implemented Master Data strategy in terms of increased revenues or reduced customer churn through better knowledge of the customer, the ability to offer enhanced service, reduced costs and improved decision making.

Having complete data about a patient, no matter where it comes from, can be critical

Just think of your own experiences with your bank, telecommunications or utility provider – how well connected are the various systems that you touch, or that touch you, when you interact with them, and what could they do to improve that?  It will give you a clue as to the advantages and importance of a ‘single view of the customer/citizen/patient’.


I would however like to draw your attention to an important aspect of all Master Data Management projects, irrespective of vendor solution, and which itself is considered to be vital to their success by Andrew White of Gartner.  Read his blog on MDM and Metadata Management here.

This is the topic of metadata, an underlying architectural consideration which I believe is often missed, or understated in importance, when vendors and analysts look at MDM projects, tools or methods.  I should also stress that by metadata I do not, in this context, mean the metadata created within the vendor product, which helps manage the data flows through their tools.

I mean the metadata (tables, fields, descriptions, relationships and customisations), sometimes called the data model, which underpins the source applications which are to be incorporated in the Master Data solution.

Gartner defines Master Data Management of Customer Data Solutions as being software products which:

  • “Support the global identification, linking and synchronization of customer information across heterogeneous data sources through semantic reconciliation of master data
  • Create and manage a central, persisted system of record or index of record for master data
  • Enable delivery of a single customer view to all stakeholders, in support of various business benefits
  • Support ongoing master data stewardship and governance requirements through workflow-based monitoring and corrective-action techniques” (Gartner, 2014)

Their document also references the differences in approach that can be taken to implement a solution: “The instantiation of the customer master data, ranging from the maintenance of a physical “golden record” to a more virtual, metadata-based, indexing structure.”

There are specific requirements mentioned about the need to model complex relationships between source applications, to express data in terms of logical data models with associated metadata, to provide business consumable metadata as part of the management of data, business rules and sources.

In summary there is an acknowledgement of the role of metadata, however I would argue that an understanding of how the source applications which are to be incorporated into any Master Data Management systems are structured in terms of their data model is a fundamental requirement.  If you don’t know where the tables (metadata) are that represent for example, “customer” in your source applications and how they are related to other tables and might have been customised, then it is impossible to be sure that you are using or updating the right data.  Also if you do not have easy access to the metadata information in source applications then your project could be delayed, go over budget or under deliver as you struggle to find it.

MDM vendors typically refer to metadata in the context of their own solutions and it is indeed important that they have such a concept as it makes the implementation and management of their software more effective.

However, the task of finding the metadata from the source or ‘spoke’ applications and mapping to internal metadata is vital and of course the time it takes to do this will depend on the nature of the applications to be integrated with the MDM solution.

For some applications this job will be very easy and straightforward.  Reverse engineering a database or application with a small number of tables and then mapping them to the internal metadata will be a relatively simple task which can often be accomplished manually.
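To illustrate the mapping task itself (all system, table and attribute names here are invented, and no MDM product’s internal model is implied), a sketch in Python:

```python
# Sketch of the source-to-master mapping step: once the relevant source
# tables and columns have been located, each is tied to an attribute of
# the MDM hub's customer model. All names below are invented.
MAPPINGS = [
    # (source system, source table.column, master attribute)
    ("ERP", "KUNDEN.KUNDENNAME", "customer_name"),
    ("CRM", "ACCOUNT.NAME", "customer_name"),
    ("ERP", "KUNDEN.USTID", "tax_id"),
    ("CRM", "ACCOUNT.BILLINGPOSTALCODE", "postal_code"),
]

def sources_for(attribute):
    """List every source column that feeds one master attribute."""
    return [(system, column) for system, column, attr in MAPPINGS
            if attr == attribute]

def coverage_gaps(required):
    """Report master attributes with no mapped source: gaps to resolve."""
    mapped = {attr for _, _, attr in MAPPINGS}
    return sorted(set(required) - mapped)

print(sources_for("customer_name"))
print(coverage_gaps(["customer_name", "tax_id", "date_of_birth"]))
```

The hard part is not the mapping table; it is locating the right source columns in a 90,000-table application in the first place, which is where metadata discovery earns its keep.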

Doing the same job for a large, complex and usually customised packaged software application, for example from SAP, Oracle or others, can be an arduous and difficult task, prone to delay and error.

We know of no MDM vendor whose products have effective tools for reverse engineering the metadata from these systems and allowing data analysts and architects to easily locate what they need for their solution.  Typically their solutions work from prebuilt templates or just lists of tables which require the user to know what they are looking for before they start.  Another alternative is to engage consultants or pull applications specialists into the project – adding to the cost and resource needed for the project.

My argument is that by taking due account of the metadata foundations which underpin the application data and developing a real understanding of it, the MDM solution would be better positioned to deliver projects that meet time, budget and quality project targets.

This would involve the more tightly integrated use of modelling and metadata discovery tools and methods to complement the product suites currently delivered by those in the Magic Quadrant.

London Underground

Trying to find your way around the data model for an ERP or CRM application without some form of accurate guide is a bit like trying to navigate from one part of London to another on the Underground without a map showing the stations, lines and interchanges.  If you know the route and where to change then it is easy.  If you have no map then you can try to find someone to ask, or guess, or use trial and error to find your way.  A map makes it easier and quicker to find where you are going.

Use a map for your metadata, and understanding the data model for large, complex applications in the context of your project becomes more straightforward and faster.

 

Roland Bullivant
Sales and Marketing Director
Silwood Technology Limited

Find me on LinkedIn

Take a look at our website: www.silwoodtechnology.com

Posted in Data Governance, Data Model, Enterprise application, JD Edwards, Master Data Management, Metadata discovery, Oracle, PeopleSoft, Salesforce, SAP, Uncategorized | Tagged data model, master data management, MDM, metadata, Oracle, SAP | Leave a comment

Application data model examples

Posted on December 1, 2014 by silwoodblog

We’ve had quite a bit of interest in our ‘example’ data models from ERPs over the last few months.  A number of people have said that they don’t have a modelling tool easily accessible and therefore can’t see what the diagrams look like, which is a fair point.

With that in mind, you will see two images here: one of a JD Edwards Accounts Payable system as a PowerDesigner data model, the other a set of Salesforce tables as an ERwin data model.

Salesforce Opportunity data model in CA ERwin format

JD Edwards Accounts Payable Model in PowerDesigner Format

These give an idea of the kind of Entity-Relationships models that can be made with Safyr, based upon metadata drawn from the respective application.


The first data model we produced in this series was a SAP ‘Bill of Materials’ model – so you might ask why that’s not shown here as an image too?  The short answer is that it’s too large to display easily in a single picture.

In my view it also provides evidence of the need for tools which allow the user to quickly locate the tables they need in the context of their project because trying to visualize an entire JD Edwards, Salesforce or for that matter SAP data model is totally impractical.

Nick Porter
Technical Director

Posted in Data MOdel, Data Model, JD Edwards, Safyr, Salesforce, Uncategorized | Tagged Data Discovery, data model, metadata, Salesforce | 1 Comment

Application metadata in an Agile Data Warehouse world

Posted on November 26, 2014 by silwoodblog

I have been reading quite a lot recently about the applicability of Agile methodologies for Data Warehousing amongst other initiatives and it appears to me that often the topic of source data analysis is under-represented in the literature and blogs.

In the Agile Manifesto (http://agilemanifesto.org) published in 2001, much is made of the need for delivering solutions which satisfy the business user (customer) in a timely way, incorporating change, frequent delivery, collaboration between the business and IT and much more.

In an excellent piece written by Kent Graziano (Data Warrior LLC, http://kentgraziano.com/) for Oracle Scene recently, he reflects on the 12 specific principles of the Agile Manifesto in the context of making delivery of Data Warehouses faster and more effective.

However, amongst the discussion about data modelling for a data warehouse (possibly using the Data Vault method) and about changes to ETL coding and the warehouse itself as requirements change, little attention is paid to the originating applications which provide the raw data for the warehouse.

In an Agile world it is important to be able to deliver small and often, to be able to cater for changes due to user needs, business imperatives and regulatory requirements. Using approaches such as Data Vault and modern model driven data warehouse design solutions (such as that provided by BiReady) and ETL tools the IT department or project team has a good chance of succeeding in delivering effective sources for BI, Data Discovery and Analytics tools.

Knowing where to find what you need is important and may require some agility

In an Agile world I would maintain that it is also critical to have a way to understand the data models (metadata) which underpin the ‘systems of record’ from which a great deal of the data for the Data Warehouse is extracted. Obviously profiling that data to discern its quality and suitability for the end user is important – however, one needs to know where the data is first. An Enterprise Data Model could potentially fulfil this need, as long as it contains accurate, up-to-date information about the data sources; however, such models are rare and not easy to build and maintain, especially if the sources include large strategic COTS packages.

Whether as part of an Enterprise Data Model or not, understanding the metadata of many of these packaged systems often presents a significant challenge even for seasoned application specialists. The COTS packages were simply not developed with this type of activity in mind, so when a data warehouse team is faced with trying to find data in packaged systems from vendors such as SAP and Oracle they regularly experience delays and rework. This is because the data models for these systems are so big, complex and opaque that navigating around them to find the right tables is difficult and time-consuming and, in the absence of tools from ERP and IM vendors, has traditionally been a manual or at best partially automated task.

As an example, in an SAP system they would encounter a data model which typically runs to over 90,000 tables and has been customised. In itself that might not be a problem if the System Catalogue held the logical names and descriptions, and the relationships between tables – but it doesn’t, so looking there or trying to reverse engineer that information is of little or no help. Trying to figure out, for example, the meaning of a table called T077D, which includes columns FAUSV and FAUSF, and how it is joined to other tables could be a bit tricky and take a long time. I would suggest that this approach cannot possibly support an Agile method.

An alternative to the manual method is to use a metadata discovery tool which automates the process of reverse engineering the metadata, including logical names, customisations, relationships, views etc from the application’s Data Dictionary tables in a matter of minutes or hours.
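As an illustration of the kind of lookup such a tool automates: SAP’s logical metadata lives in its Data Dictionary tables (DD02T holds table texts, DD03L field lists, DD04T data element texts). The sketch below mocks a few rows of those dictionary tables in sqlite, using the T077D example above; the sample rows, the assumption that each field’s ROLLNAME matches its field name, and the `describe` helper are all illustrative, not a real extraction.

```python
import sqlite3

# Mocked SAP Data Dictionary tables. DD02T/DD03L/DD04T are genuine SAP
# dictionary table names; the rows below are invented for illustration.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE DD02T (TABNAME TEXT, DDLANGUAGE TEXT, DDTEXT TEXT);
    CREATE TABLE DD03L (TABNAME TEXT, FIELDNAME TEXT, ROLLNAME TEXT);
    CREATE TABLE DD04T (ROLLNAME TEXT, DDLANGUAGE TEXT, DDTEXT TEXT);
    INSERT INTO DD02T VALUES ('T077D', 'E', 'Customer account groups');
    INSERT INTO DD03L VALUES ('T077D', 'FAUSV', 'FAUSV');
    INSERT INTO DD03L VALUES ('T077D', 'FAUSF', 'FAUSF');
    INSERT INTO DD04T VALUES ('FAUSV', 'E', 'Sales data');
    INSERT INTO DD04T VALUES ('FAUSF', 'E', 'Company code data');
""")

def describe(table):
    """Resolve a cryptic table name to its logical name and field texts."""
    (table_text,) = db.execute(
        "SELECT DDTEXT FROM DD02T WHERE TABNAME = ? AND DDLANGUAGE = 'E'",
        (table,)).fetchone()
    # Field descriptions come from the data element text, joined via ROLLNAME.
    fields = db.execute(
        """SELECT f.FIELDNAME, t.DDTEXT
             FROM DD03L f JOIN DD04T t ON t.ROLLNAME = f.ROLLNAME
            WHERE f.TABNAME = ? AND t.DDLANGUAGE = 'E'""",
        (table,)).fetchall()
    return table_text, dict(fields)

name, fields = describe("T077D")
```

A metadata discovery product does this resolution across the whole dictionary at once, rather than one table at a time, and adds the relationship and customisation information.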

This will then give the data warehouse designers, architects and modellers an interface into that information so that they can find and use the tables they need quickly and easily even after additional requirements are identified or rework needed and would enable an Agile approach for the whole project life-cycle.

SAP Table T077D showing columns and related tables (only those with data)

By the way, in case you were wondering T077D is the Customer Account Groups table in SAP.

Two of its columns are FAUSV (Sales Data) and FAUSF (Company Code Data). The table has 30 Child Tables (in our system only 17 of which contain data) and 2 Parent Tables.

 

 

Roland Bullivant
Sales and Marketing Director

Posted in Agile, Data, Data MOdel, Data Warehouse, Enterprise Metadata Management, JD Edwards, Oracle, PeopleSoft, Salesforce, SAP, Siebel, Uncategorized | Tagged Agile Manifesto, Agile methodologies, Data Warehouse, metadata | 1 Comment

How to find Data Models for SAP BW InfoCubes

Posted on November 3, 2014 by silwoodblog

This is the third in a series of data models drawn from the various ERP and CRM systems for which Safyr delivers rapid, accurate and usable metadata.  Previously on this blog we’ve produced a Bill of Materials model from a SAP system and an Accounts Payable model from a JD Edwards EnterpriseOne 9.1 system.

SAP BW InfoCubes are complex and often difficult to understand at the data model level without specialist knowledge.

This model is a bit different – it’s a Cube from a SAP BW system.  In the terminology of SAP it’s an InfoCube, and this one is the ‘Demo’ cube that comes pre-packaged with the SAP BW system, and as its name implies, is for demonstration purposes.  Finding the data model behind an SAP BW InfoCube is often seen as a challenge if you are not familiar with the structure.

The model was constructed by reverse engineering a BW system into Safyr, and then choosing the Demo cube as the basis for making a model.  Safyr for BW makes it easy to select all the relevant ‘component’ tables of a Cube to be included in the model.

The whole process of choosing a Cube and exporting to our own ER diagrammer visualisation tool or to a modelling environment takes about 5 minutes.

It’s available below in PowerDesigner and ERwin formats. I can make it available in other formats for anyone that doesn’t have these tools – just drop me a message via the comments box stating your desired format and I’ll see what I can do.

Here is the legal bit. In no event will Silwood Technology Ltd be liable for any indirect, special or consequential loss arising out of or in any way relating to the use or performance or the use or misuse of the information presented in this model.

Here is the link to the download of the model in PowerDesigner format: PowerDesigner Cube Model

Here is the link to the download of the ERwin model: ERwin Cube Model

I hope they are useful – please let me know what you think.

Nick Porter
Technology Director

Posted in Data, Data MOdel, Data Model, Enterprise Metadata Management, Metadata discovery, Safyr, SAP, SAP BW, Uncategorized | Tagged Analytics, BI, Business Intelligence, data model, InfoCube, SAP BW | Leave a comment