
08 May, 2013

Redefining traceability in Enterprise Architecture and implementing the concept with TOGAF 9.1 and/or ArchiMate 2.0

One of the responsibilities of an Enterprise Architect is to provide complete traceability from requirements analysis and design artefacts, through to implementation and deployment.

Over the years, I have found that the term traceability is not always understood in the same way by different Enterprise Architects.

Let’s start with a definition of traceability. Traceable is an adjective: capable of being traced. Finding a definition, even in a dictionary, is a challenge; the most relevant one I found is on Wikipedia and may be used as a reference: “The formal definition of traceability is the ability to chronologically interrelate uniquely identifiable entities in a way that is verifiable.”

In Enterprise Architecture, traceability may mean different things to different people.

 

Some people refer to

· Enterprise traceability which proves alignment to business goals

· End-to-end traceability to business requirements and processes

· A traceability matrix, the mapping of systems back to capabilities or of system functions back to operational activities

· Requirements traceability, which helps deliver quality solutions that meet the business needs

· Traceability between requirements and TOGAF artifacts

· Traceability across artifacts

· Traceability of services to business processes and architecture

· Traceability from application to business function to data entity

· Traceability between a technical component and a business goal

· Traceability of security-related architecture decisions

· Traceability of IT costs

· Traceability to test scripts

· Traceability between artifacts from business and IT strategy to solution development and delivery

· Traceability from the initial design phase through to deployment

· And probably more

 

The TOGAF 9.1 specification rarely refers to traceability; the only sections where the concept appears are in the various architecture domains, where we should document a requirements traceability report or traceability from application to business function to data entity.

The most relevant section is probably where in the classes of architecture engagement it says:

“Using the traceability between IT and business inherent in enterprise architecture, it is possible to evaluate the IT portfolio against operational performance data and business needs (e.g., cost, functionality, availability, responsiveness) to determine areas where misalignment is occurring and change needs to take place.”

And how do we define and document traceability from an end user or stakeholder perspective? The best approach would probably be to use a tool which would render a view like the one in this diagram:

[Diagram: traceability view across the four architecture domains]

 

In this diagram, we show the relationships between the components from the four architecture domains. Changing one of the components would then allow an impact analysis.
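The impact analysis described above can be sketched as a reverse traversal of the traceability graph: find everything that, directly or transitively, depends on the component being changed. This is a minimal sketch; the component and relationship names are illustrative assumptions, not taken from any real repository.

```python
from collections import defaultdict, deque

# Hypothetical traceability relationships across the four architecture
# domains; each edge reads "source depends on target".
EDGES = [
    ("Goal:ReduceCosts",  "Process:Invoicing"),
    ("Process:Invoicing", "App:BillingSystem"),
    ("App:BillingSystem", "Data:Invoice"),
    ("App:BillingSystem", "Tech:AppServer01"),
]

def impact_of(component, edges):
    """Return every element that (directly or transitively) depends on
    `component`, i.e. the scope of an impact analysis when it changes."""
    reverse = defaultdict(set)
    for src, dst in edges:
        reverse[dst].add(src)          # who depends on dst?
    impacted, queue = set(), deque([component])
    while queue:
        node = queue.popleft()
        for dependant in reverse[node]:
            if dependant not in impacted:
                impacted.add(dependant)
                queue.append(dependant)
    return impacted

print(sorted(impact_of("Tech:AppServer01", EDGES)))
```

Changing the server "Tech:AppServer01" surfaces the application, process and ultimately the business goal that sit above it, which is exactly the kind of view an EA tool renders graphically.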

Components may have different meanings as illustrated in the next diagram:

 

[Diagram: the meanings a component can take in each architecture domain]

Using the TOGAF 9.1 framework, we would use the concepts of its metamodel. The core metamodel entities show the purpose of each entity and the key relationships that support architectural traceability, as stipulated in section 34.2.1, Core Content Metamodel Concepts.

So now, how do we build that traceability? This is going to happen along the various ADM cycles that an enterprise will support. It is going to be quite a long process depending on the complexity, the size and the various locations where the business operates.

 

There may be five different ways to build that traceability:

· Manually using an office product

· With an enterprise architecture tool not linked to the TOGAF 9.1 framework

· With an enterprise architecture tool using the TOGAF 9.1 artifacts

· With an enterprise architecture tool using ArchiMate 2.0

· Replicating the content of an Enterprise Repository such as a CMDB in an Architecture repository

 

1. Manually using an office product

You will probably document your architecture with word processing, spreadsheet and diagramming tools, and store these documents in a file structure on a file server, ideally using some form of content management system.

Individually these tools are great, but collectively they fall short of forming a cohesive picture of the requirements and constraints of a system or an enterprise. The links between these deliverables soon become unmanageable, and in the long term the impact analysis of any change becomes practically impossible. Information will be hard to find and to trace from a requirement all the way back to the business goal that drives it. This is particularly difficult to achieve when requirements are stored in spreadsheets while use cases and business goals are contained in separate documents. Other issues such as maintenance and consistency would also have to be considered.
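One symptom of the office-product approach is that links between documents silently rot: a requirement listed in one spreadsheet may never be mapped anywhere. The sketch below, using invented requirement IDs and file contents, shows the kind of consistency check you end up scripting by hand when the tools themselves don't maintain the links.

```python
import csv
import io

# Hypothetical exports of two separate spreadsheets: one listing
# requirements, one mapping requirements to use cases.
requirements_csv = """id,description
REQ-1,Invoices must be archived for 10 years
REQ-2,Users must authenticate with SSO
REQ-3,Reports must be available within 5 seconds
"""
mapping_csv = """requirement_id,use_case
REQ-1,UC-Archive
REQ-3,UC-Reporting
"""

requirements = {row["id"] for row in csv.DictReader(io.StringIO(requirements_csv))}
mapped = {row["requirement_id"] for row in csv.DictReader(io.StringIO(mapping_csv))}

# Requirements with no trace to any use case: broken traceability.
orphans = requirements - mapped
print(sorted(orphans))
```

Here REQ-2 has quietly fallen out of the traceability chain; at enterprise scale, nobody notices until an impact analysis is needed.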

[Diagram: architecture documents scattered across office tools]

 

2. With an enterprise architecture tool not linked to the TOGAF 9.1 framework

Many enterprise architecture tools or suites provide different techniques to support traceability, but they rarely describe how these work and focus mainly on requirements traceability. In the following example, we use a traceability matrix between user requirements and functional specifications, use cases, components, software artifacts, test cases, business processes, design specifications and more.

Mapping the requirements to use cases and other information can be very labor-intensive.

[Diagram: requirements traceability matrix]

Some tools also allow for the creation of relationships between the various layers using grids or allowing the user to create the relationships by dragging lines between elements.

Below is an example of what traceability would look like in an enterprise architecture tool after some time. That enterprise architecture ensures appropriate traceability from business architecture to the other allied architectures.

 

[Diagram: traceability relationships in an enterprise architecture tool]

 

3. With an enterprise architecture tool using the TOGAF 9.1 artifacts

The TOGAF 9.1 core metamodel provides a minimum set of architectural content to support traceability across artifacts. Usually we use catalogs, matrices and diagrams to build traceability rather than dragging lines between elements (except possibly for the diagrams). Maintaining catalogs and matrices is an activity which may be assigned to various stakeholders in the organisation, and it can sometimes hide the complexity associated with an enterprise architecture tool.

 

[Diagram: TOGAF 9.1 core content metamodel]

Using artifacts creates traceability. As an example from the specification: “A Business Footprint diagram provides a clear traceability between a technical component and the business goal that it satisfies, while also demonstrating ownership of the services identified”. Other artifacts also describe traceability: the Data Migration Diagram and the Networked Computing/Hardware Diagram.

 

4. With an enterprise architecture tool using ArchiMate 2.0

Another possibility could be the use of the ArchiMate standard from The Open Group. Some of that traceability could also be achieved using BPMN and UML for specific domains, such as process details in Business Architecture, or for building the bridge between Enterprise Architecture and software architecture.

With ArchiMate 2.0 we can define end-to-end traceability and produce several viewpoints, such as the Layered Viewpoint, which shows several layers and aspects of an enterprise architecture in a single diagram. Elements are modelled in five different layers and extensions when displaying the enterprise architecture; these are then linked with each other using relationships. We differentiate between the following layers and extensions:

· Business layer

· Application layer

· Technology layer

· Motivation extension

· Implementation and migration extension

The example from the specification below documents the various architecture layers.

[Diagram: ArchiMate 2.0 Layered Viewpoint example]

As you will notice, this ArchiMate 2.0 viewpoint looks quite similar to the TOGAF 9.1 Business Footprint Diagram which provides a clear traceability between a technical component and the business goal that it satisfies, while also demonstrating ownership of the services identified.

Another example could be the description of the traceability among business goals, technical capabilities, business benefits and metrics. The key point about the motivation extension is to work with the requirement object.

Using the motivation viewpoint from the specification as a reference (motivation extension), you could define business benefits and expectations within the business goal object, then define sub-goals as KPIs to measure the benefits of the plan and list all of the identified requirements of the project or program. Finally, you could link these requirements with either an application or an infrastructure service object representing software or technical capabilities (partial example below).

 

[Diagram: partial motivation viewpoint example]
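The goal-to-service chain of the motivation extension can be sketched in code as a simple realization hierarchy. This is only an illustration of the idea; the element names, and the reduction of ArchiMate's relationship types to a single "realized by" link, are my own simplifications.

```python
from dataclasses import dataclass, field

# A minimal sketch of motivation-extension traceability: goal -> KPI
# sub-goal -> requirement -> application service. Names are illustrative.
@dataclass
class Element:
    name: str
    layer: str                      # e.g. "Motivation", "Application"
    realized_by: list = field(default_factory=list)

goal = Element("Increase customer retention", "Motivation")
kpi = Element("Churn rate below 5%", "Motivation")       # sub-goal as KPI
req = Element("Provide a self-service portal", "Motivation")
svc = Element("Customer Portal Service", "Application")

goal.realized_by.append(kpi)
kpi.realized_by.append(req)
req.realized_by.append(svc)

def trace(element, depth=0):
    """Walk the realization chain from a goal down to services."""
    lines = ["  " * depth + f"{element.layer}: {element.name}"]
    for child in element.realized_by:
        lines.extend(trace(child, depth + 1))
    return lines

print("\n".join(trace(goal)))
```

Walking the chain top-down reproduces, in text form, what the motivation viewpoint shows graphically: which software capability ultimately realizes which business benefit.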

 

One of the common questions I have recently received from various enterprise architects is “Now that I know TOGAF and ArchiMate… how should I model my enterprise? Should I use the TOGAF 9.1 artifacts to create that traceability? Should I use ArchiMate 2.0? Should I use both? Should I forget the artifacts...”. These are good questions and I’m afraid that there is not a single answer.

What I know is that if I select an enterprise architecture tool supporting both TOGAF 9.1 and ArchiMate 2.0, I would like to have full synchronization. If I model a few ArchiMate models, I would like my TOGAF 9.1 artifacts (catalogs and matrices) to be created at the same time, and if I create artifacts from the taxonomy, I would like my ArchiMate models to be created as well.

Unfortunately I do not know the current level of tool maturity and whether tool vendors provide that synchronization. This would obviously require some investigation and should be one of the key criteria if you are currently looking for a product supporting both standards.

 

5. Replicating the content of an Enterprise Repository such as a CMDB in an Architecture repository

This other possibility requires that you have an up-to-date Configuration Management Database and that you have developed an interface between it and your Architecture Repository, i.e. your enterprise architecture tool. If you are able to replicate the relationships between the infrastructure components and applications (CIs) into your enterprise architecture tool, that would partially create your traceability.
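The replication step can be sketched as a small transformation: take a CMDB export of CIs and their relationships, and map them onto the element types of the EA repository. The export format, the CI data and the target type names below are all assumptions for illustration; a real integration would pull this content over the CMDB's own API.

```python
# Hypothetical CMDB export: configuration items plus their relationships.
cmdb_export = {
    "cis": [
        {"id": "CI-1", "type": "server",      "name": "srv-db-01"},
        {"id": "CI-2", "type": "application", "name": "Billing"},
    ],
    "relationships": [
        {"from": "CI-2", "to": "CI-1", "type": "runs_on"},
    ],
}

# Map CMDB CI types onto (assumed) EA metamodel element types.
TYPE_MAP = {"server": "Technology Component", "application": "Application Component"}

def to_ea_repository(export):
    """Convert a CMDB export into EA-repository elements and relations."""
    elements = {ci["id"]: {"name": ci["name"], "type": TYPE_MAP[ci["type"]]}
                for ci in export["cis"]}
    relations = [(elements[r["from"]]["name"], r["type"], elements[r["to"]]["name"])
                 for r in export["relationships"]]
    return elements, relations

elements, relations = to_ea_repository(cmdb_export)
print(relations)
```

Only the application-to-infrastructure portion of the traceability comes over this way; business processes and goals still have to be modelled in the EA tool itself.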

If I summarise the various choices to build that enterprise architecture traceability, I potentially have three main possibilities:

 

[Diagram: summary of the main options for building traceability]

 

Achieving traceability within an Enterprise Architecture is key because the architecture needs to be understood by all participants, not just by technical people. It helps to incorporate the enterprise architecture effort into the rest of the organization, and it takes it to the board room (or at least the CIO’s office) where it belongs.

· Describe your traceability from your Enterprise Architecture to the system development and project documentation.

· Review that traceability periodically, making sure that it is up to date, and produce analytics out of it.

If a development team is looking for a tool that can help them document, and provide end-to-end traceability throughout the life cycle, EA is the way to go. Make sure you use the right standard and platform. Finally, communicate and present the results of your effort to your stakeholders.

19 March, 2011

Creation of a strategy for the consumption and management of Cloud Services in the TOGAF Preliminary Phase

In a previous article I described the need to define a strategy as an additional step in the TOGAF 9 Preliminary Phase. This article describes in more detail what the content of such a document could be, and what the governance activities related to the Consumption and Management of Cloud Services are.

[Diagram: strategy definition within the TOGAF Preliminary Phase]

Before deciding to switch over to Cloud Computing, companies should first fully understand the concepts and implications of an internal IT investment versus buying this as a service. There are different approaches which may have to be considered at an enterprise level when Cloud computing is considered: Public Cloud vs Private Cloud vs Hybrid Cloud. Although many people already know what the differences are, below is a summary of the various models:

· A public Cloud is one in which the consumer of Cloud services and the provider of cloud services exist in separate enterprises. The ownership of the assets used to deliver cloud services remains with the provider

· A private Cloud is one in which both the consumer of Cloud services and the provider of those services exist within the same enterprise. The ownership of the Cloud assets resides within the same enterprise providing and consuming cloud services. It is really a description of a highly virtualized, on-premise data center that is behaving as if it were that of a public cloud provider

· A hybrid Cloud combines multiple elements of public and private cloud, including any combination of providers and consumers

[Diagram: public, private and hybrid Cloud models]

Once the major Business stakeholders understand the concepts, some initial decisions may have to be documented and included in that document. The same may also apply to the various Cloud Computing categorisations, as diagrammed below:

[Diagram: Cloud Computing categorisations]

The categories the enterprise may be interested in, related to existing problems, can already be included as a section in that document.

Quality Management

There is a need for a system for evaluating performance, whether in the delivery of Cloud services or in the quality of products provided to consumers or customers. This may include:

· Test planning and test asset management, from Business requirements to defects

· Project governance and release decisions based on standards such as PRINCE2/PMI and ITIL

· Data quality control: all data uploaded to a Cloud computing service provider must fit the requirements of the provider, which should be detailed and provided by the provider

· Detailed and documented Business Processes as defined in ISO 9001:

o Systematically defining the activities necessary to obtain a desired result

o Establishing clear responsibility and accountability for managing key activities

o Analyzing and measuring of the capability of key activities

o Identifying the interfaces of key activities within and between the functions of the organization

o Focusing on the factors such as resources, methods, and materials that will improve key activities of the organization

o Evaluating risks, consequences and impacts of activities on customers, suppliers and other interested parties

Security Management

This would address and document specific topics such as:

· Eliminating the need to constantly reconfigure static security infrastructure for a dynamic computing environment

· Defining how services are able to securely connect and reliably communicate with internal IT services and other public services

· Penetration security checks

· How the Security Management/System Management/Network Management teams monitor security and availability

Semantic Management

The amount of unstructured electronic information in enterprise environments is growing rapidly. Business people have to collaboratively reconcile their heterogeneous metadata and then apply the derived business semantic patterns to establish alignment between the underlying data structures. The way this will be handled may also be included.

IT Service Management (ITIL)

IT Service Management or IT Operations teams will have to address many new challenges due to the Cloud. This will need to be addressed for some specific processes such as:

· Incident Management

o The Cloud provider must ensure that all outages or exceptions to normal operations are resolved as quickly as possible, while capturing all of the details of the actions that were taken and communicating them to the customer.

· Change Management

o Strict change management practices must be adhered to and all changes implemented during approved maintenance windows must be tracked, monitored, and validated.

· Configuration Management (Service Asset and...)

o Companies who have a CMDB must provide it to the Cloud providers, with detailed descriptions of the relationships between configuration items (CIs)

o CI relationships empower change and incident managers to determine whether a modification to one service may impact several other related services and the components of those services

o This provides more visibility into the Cloud environment, allowing consumers and providers to make more informed decisions not only when preparing for a change but also when diagnosing incidents and problems

· Problem Management

o The Cloud provider needs to perform root cause analysis in case of problems

[Diagram: ITIL processes in a Cloud context]

· Service Level Management

o Service Level Agreements (or Underpinning Contracts) must be transparent and accessible to the end users. The business representatives should be negotiating these agreements; they will need to effectively negotiate commercial, technical and legal terms. It will be important to establish concrete, measurable Service Level Agreements (SLAs); without these, and an effective means of verifying compliance, the damage from poor service levels will only be exacerbated
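"Measurable" and "verifiable" are the operative words above. As a minimal sketch, an availability SLA can be verified from the provider's downtime figures with a few lines of arithmetic; the 99.9% target and the downtime values are illustrative, and a real check would also have to agree on the measurement window and exclusions.

```python
# Simplified month: 30 days. Real SLAs define the window precisely.
MINUTES_PER_MONTH = 30 * 24 * 60

def availability(downtime_minutes):
    """Availability over the month, as a percentage."""
    return 100.0 * (MINUTES_PER_MONTH - downtime_minutes) / MINUTES_PER_MONTH

def sla_met(downtime_minutes, target_pct=99.9):
    """Compliance check against a (hypothetical) 99.9% availability SLA."""
    return availability(downtime_minutes) >= target_pct

print(round(availability(45), 3), sla_met(45))
print(round(availability(20), 3), sla_met(20))
```

A 99.9% monthly target allows roughly 43 minutes of downtime; 45 minutes breaches it while 20 minutes does not, which shows why vague "high availability" wording is not negotiable material.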

· Vendors Management

o The relationship between a vendor and its customers changes

o Contractual arrangements

· Capacity Management and Availability Management

o Reporting on performance

Other activities must be documented such as:

Monitoring

· Monitoring will be a very important activity and should be described in the Strategy document. The assets and infrastructure that make up the Cloud service are not within the enterprise. They are owned by the Cloud providers, which will most likely focus on maximizing their revenue, not necessarily on optimizing the performance and availability of the enterprise’s services. Establishing sound monitoring practices for the cloud services from the outset will bring significant benefits in the long term. Outsourcing the delivery of a service does not necessarily imply that we can outsource the monitoring of that service. Besides, today very few cloud providers offer any form of service level monitoring to their customers. Quite often, they are providing the Cloud service but not proving that they are providing that service.

· The resource usage and consumption must be monitored and managed in order to support strategic decision making

· Whenever possible, the Cloud providers should furnish the relevant tools for management and reporting and take away the onerous tasks of patch management, version upgrades, high availability, disaster recovery and the like. This obviously will impact IT Service Continuity for the enterprise.

· Service Measurement, Service Reporting and Service Improvement processes must be considered.

Consumption and costs

· Service usage (when and how) should be tracked to determine the intrinsic value that the service is providing to the Business; IT can also use this information to compute the Return on Investment for their Cloud computing initiatives and related services. This relates to the IT Financial Management process.
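The ROI computation mentioned above is straightforward once usage and cost are measured; the sketch below uses the classic (gain − cost) / cost ratio, and every figure in it (value per use, usage volume, subscription cost) is an illustrative assumption.

```python
def cloud_roi(business_value_per_use, uses_per_month, monthly_cost, months=12):
    """Classic ROI ratio for a Cloud service over a period.

    All inputs are hypothetical measurements: the business value
    attributed to one service use, monthly usage volume, and the
    monthly subscription cost."""
    gain = business_value_per_use * uses_per_month * months
    cost = monthly_cost * months
    return (gain - cost) / cost

roi = cloud_roi(business_value_per_use=2.0, uses_per_month=5000, monthly_cost=6000)
print(f"{roi:.2%}")
```

The hard part in practice is not the formula but attributing a credible business value per use, which is exactly why usage monitoring feeds IT Financial Management.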

[Diagram: service usage and cost reporting]

Risk Management

The TOGAF 9 risk management method should be considered to address the various risks associated such as:

· Ownership, cost, scope, provider relationship, complexity, contractual and client acceptance risks, etc.

· Other risks should also be considered, such as usability, security (obviously...) and interoperability

Asset Management and License Management

When various cloud approaches are considered (services on-premise or via the Cloud), hardware and software license management has to be defined to ensure companies can meet their governance and contractual requirements.

Transactions

Ensuring the safety of confidential data is a mission-critical aspect of the business. Cloud computing gives companies concerns over the lack of control that they will have over company data, and does not enable them to monitor the processes used to organize the information.

Being able to manage the transactions in the Cloud is vital and Business transaction safety should be considered (recording, tracking, alerts, electronic signatures, etc...).

There may be other aspects which should be integrated in this Strategy document that may vary according to the level of maturity of the enterprise or existing best practices in use.

When considering Cloud computing, the Preliminary Phase will include in the definition of the Architecture Governance Framework most of the touch-points with the other processes described above. At completion, touch-points and impacts should be clearly understood and agreed by all relevant stakeholders.

21 April, 2009

Keep an eye on OneCMDB and the CMDB Federation

OneCMDB Version 2.0 is a really interesting concept and product, as it may be one of the first IT Service Management solutions developed in an Open Source mode. It will not replace your Service Desk solution, but it may help companies with a limited budget or companies which have a wide diversity of existing catalogs of assets. It only covers Configuration Management as a process and, to some extent, IT Asset Management. For those who are using Nagios, some connectors exist.



It was initially developed by Lokomo Systems using Java, but I’m not sure how that fits with the CMDB Federation Group (I wrote a post on the subject in 2007), if it still exists… (I haven’t seen any indication of activity since January 2008, so maybe this is a dead project…).


In any case, keep an eye on both…

21 August, 2007

Do not invest too much in building your CMDB now!

Interestingly enough, a few days ago a new version of the CMDB Federation whitepaper was posted. The list of participants is impressive, as we find every key player in ITSM: CA, IBM, BMC, Fujitsu, HP and Microsoft have worked together to propose a standard for federating data from ITIL-compliant CMDB repositories. The intention is to submit their achievement to a standards body sometime this year. For the time being, this is a draft which is being reviewed by the main IT Service Management actors.

In this current version, there are explanations of how to push or pull information from/to various CMDBs into a main CMDB. These “other CMDBs” are named MDRs: Management Data Repositories. The exchange mechanisms and technologies used to access these MDRs will be based on the SOA stack (HTTP, SOAP, WSDL, XML).
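A pull from an MDR would ultimately deliver CI data as XML over that stack. As a minimal sketch of the consuming side, the XML schema below is my own invention standing in for whatever the federation draft actually defines, and the HTTP/SOAP transport is omitted.

```python
import xml.etree.ElementTree as ET

# A made-up MDR response; the real schema comes from the federation draft.
mdr_response = """<mdrResponse>
  <ci id="CI-100" type="server"><name>srv-app-01</name></ci>
  <ci id="CI-101" type="database"><name>ordersdb</name></ci>
</mdrResponse>"""

def pull_cis(xml_text):
    """Parse an MDR response into CI records for the main CMDB."""
    root = ET.fromstring(xml_text)
    return [{"id": ci.get("id"), "type": ci.get("type"),
             "name": ci.findtext("name")}
            for ci in root.findall("ci")]

for ci in pull_cis(mdr_response):
    print(ci["id"], ci["type"], ci["name"])
```

The point of the standard is precisely that the main CMDB can run this kind of pull against any vendor's MDR without a proprietary bridge.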



What does that mean concretely?

· In a few months (years?) vendors could sell Service Management solutions with a different metamodel
· Instead of using autodiscovery tools, vendors could provide these push pull mechanisms for companies which already have in place other platforms such as Asset Management systems, or system management platforms, or other inventory systems
· It may become unnecessary to build proprietary bridges with systems which contain the information needed to build a CMDB. This could be postponed.
· Vendors which actually come with tactical scenarios such as HP-Mercury with their UCMDB could change their strategy sooner than expected!
· Companies in a CMDB project may have to “migrate” in the future to another approach, based probably on a hub and spoke architecture
· Taxonomy, being a key challenge for such a project, may impact existing CMDB implementations

Companies which have started to build a CMDB should measure the long-term impact of that initiative with their vendors, as the CMDB landscape could change dramatically in the coming months.

20 February, 2007

Release Management should be utilized in a SOA environment

Release Management is one of the Service Support ITIL processes; it allows planning and overseeing the successful rollout of software and related hardware. Deploying Release Management will encourage IT Management to design and implement efficient procedures for the distribution and installation of changes to IT systems, and will ensure that hardware and software being changed is traceable and secure, and that only correct, authorized and tested versions are installed. IT Operations groups that are implementing ITIL Release Management in a SOA context should collaborate with Application Development leaders to extend those processes for new kinds of distributed applications and services.

IT Service Management and SOA Governance concepts


Application Development teams building SOA solutions either internally or externally or mixing both approaches must consider the ITIL Release Management process to rollout any software and hardware components.


ITIL is a set of Best Practice recommendations for IT Service Management. ITIL consists of a series of publications giving guidance on the provision of Quality IT Services, and on the Processes and facilities needed to support them. These services which are used by the user/customer can take the form of applications that they use (e.g. email services, components of HR systems, ERP and financial systems) or other services which are utilized, such as internet access, printing services, etc.


A SOA Service is defined as a unit of work to be performed on behalf of some computing entity, such as a human user or another program. SOA defines how two computing entities, such as programs, interact in such a way as to enable one entity to perform a unit of work on behalf of another entity.

The SOA Service is much more granular than an IT Service, and the latter can be the aggregation of several SOA Services. For these reasons, IT Service Management and SOA will increasingly relate to each other and be part of an SOA Governance framework, which should really be considered part of a broader IT Governance strategy.



SOA introduces many independent and self-contained moving parts: components that are typically widely reused across the enterprise and are a vital part of mission-critical business processes. It therefore becomes critical to properly manage their life cycle. SOA governance usually includes:


  • Lifecycle management. This involves the definition, the implementation and the enforcement of policies and processes across the entire SOA lifecycle.

  • Policy management. This process is used for a successful web services deployment, ensuring that services, XML messages and transactions comply with local and global security and operational policies. This would include access-control list management, identity management, and authentication and authorization policies.

  • Contract management. This activity consists of managing the relationships between service consumers and providers. Policies, capabilities and Service Level Agreements (SLAs) are negotiated. Service Level Management, an ITIL Service Delivery process, can be used here.

  • SOA metadata management. As more and more data services are created from various information sources, metadata-centric visibility tying data services to their associated information sources, participating in Data Integration techniques, becomes critical. SOA metadata management can be based on an SOA metadata repository and an SOA Registry.

Activities within Release Management must also cover SOA solutions but have additional constraints


Release Management helps to communicate and manage expectations of the Customer during the planning and rollout of new Releases. It also allows agreeing the exact content and rollout plan for the Release, through liaison with the Change Management process. New software or hardware releases are then implemented into the operational environment using the controlling processes of Configuration Management and Change Management. Also, a Release should be under Change and Configuration Management and may consist of any combination of hardware, software, firmware and document Configuration Items.


Release Management activities in the development, control, test and live environments include:



  • Release policy and planning. The Release Policy document covers Release numbering, frequency, the level in the IT infrastructure that will be controlled by definable Releases. The SOA policy defines configurable rules and conditions that affect services during design time and at runtime. The SOA policy is used to validate services at design-time, well before they're released to consumers, and is used to enforce specific standards and behaviors at runtime. The Release Policy has to include instructions such that deployed SOA solutions have to comply with the SOA Policy. Release Planning would also have to include applications running on SOA infrastructure.

  • Release design, build and configuration. The Release design of a SOA application differs from that of a classical application, as several web services can be aggregated into a composite application from in-house developments or from third-party components. License, support and Service Level Agreements will have to be defined at the web service level, and Application Development groups will have to negotiate at the component level with the different vendors. The Release Build will require additional effort because of the higher granularity of software components, and also because an impact analysis is required to identify what other applications could be affected. Configuration will require detailed installation procedures from all web service providers.

  • Release acceptance. This activity is responsible for testing a Release, and its implementation and Back-out Plans, to ensure they meet the agreed Business and IT Operations Requirements. Additional consideration has to be given to existing applications already using some component of the release. An impact analysis will lead to non-regression testing for the other applications already using those components. A controlled test environment must be configured to replicate the current live version, also taking into account external web services. Vendors should be able to contribute to the release and provide a test environment as well.

  • Rollout planning. First of all it is almost impossible to agree a rollout plan without consulting the customers; indeed they are an integral part of the planning. So to meet this goal companies will need to work closely with the customers to prepare a release or rollout plan that not only meets IT needs but also takes into account customer availability and their business deliverables. Once the plan is agreed, the IT department will need to provide constant feedback to the customers during the processing of the Release or rollout. Ideally the customers should be able to view the plan on-line any time and should receive regular reports from the plan team leader. SOA applications do not really impact this activity as customers are often not aware of the underlying architecture.

  • Extensive testing to predefined acceptance criteria. Testing of a Build or Release ensures that the parts, including the web services, work correctly together. SOA will require additional non-regression testing because of the potential sharing of components between old and new applications. Tests may have to include vendors’ components when used, and will require dedicated vendor test environments even if the web service is hosted somewhere else.

  • Signoff of the Release for implementation.

  • Communication, preparation and training.

  • Distribution and installation. This will cover the installation of new or upgraded hardware and the distribution and installation of software. The ITIL Definitive Software Library (DSL) which is the storage of controlled software in both centralized and distributed systems will also contain the SOA hardware and software components. External components will have to be documented in a SOA registry-repository as they will be hosted externally.
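The impact analysis required before Release acceptance, identifying which existing applications share a component of the release, can be sketched as a simple lookup over a service-consumption map. The service and application names below are illustrative, and in practice this map would come from the registry or CMDB rather than being hard-coded.

```python
# Hypothetical map: which applications consume which web services.
SERVICE_CONSUMERS = {
    "CustomerLookupService": {"CRM", "Billing", "Portal"},
    "PaymentService":        {"Billing"},
    "CatalogService":        {"Portal"},
}

def apps_to_retest(released_services):
    """Applications needing non-regression testing for this release."""
    impacted = set()
    for service in released_services:
        impacted |= SERVICE_CONSUMERS.get(service, set())
    return sorted(impacted)

print(apps_to_retest({"CustomerLookupService"}))
```

Releasing a single widely shared service drags three applications into the non-regression test scope, which is exactly the extra effort the activities above anticipate.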

SOA Repository and Registry


The ability to register, discover, and manage Web services is an essential requirement for any SOA implementation. This need may not be fully appreciated in the early stages of an SOA rollout, when dealing with a small number of services, but becomes almost mandatory when there is a need to support a large number of Web services. When the number of services deployed grows to dozens or hundreds, centralized facilities for access and control of service metadata and artifacts become critical. A service registry provides these capabilities and becomes a key infrastructural component. First generation service registries were based on the UDDI standard, but new products inspired by the standard have recently emerged from various vendors.
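At its core, a service registry can be reduced to a small lookup structure over service metadata. Below is a minimal, hypothetical sketch in Python; all class, field, and service names are illustrative assumptions, not drawn from UDDI or any vendor product. It shows registering several versions of a service and discovering services by capability tag:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a minimal service registry; names are illustrative.
@dataclass
class ServiceEntry:
    name: str
    version: str
    endpoint: str
    tags: set = field(default_factory=set)

class ServiceRegistry:
    def __init__(self):
        self._entries = {}

    def register(self, entry: ServiceEntry):
        # Key on (name, version) so several versions of a service can coexist.
        self._entries[(entry.name, entry.version)] = entry

    def discover(self, tag: str):
        # Return every registered service advertising the given capability tag.
        return [e for e in self._entries.values() if tag in e.tags]

registry = ServiceRegistry()
registry.register(ServiceEntry("GetQuote", "1.0", "http://example.com/quote", {"pricing"}))
registry.register(ServiceEntry("GetQuote", "2.0", "http://example.com/quote2", {"pricing"}))
print(len(registry.discover("pricing")))  # 2
```

Keying on (name, version) is exactly why Release Management matters here: two versions of the same web service can legitimately be live at once during a rollout.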


SOA Repositories and registries should integrate with CMDBs (Figure 1).


Figure 1 Registry and Repository linked to a CMDB


Product choices and strategies

  • Systinet is a mature solution but requires further integration in the HP IT Service Management suite. The roadmap for HP’s IT Service Management solutions identified Peregrine ServiceCenter as the evolution of HP OpenView Service Desk. However, as HP also acquired Mercury, the future HP CMDB will target Appilog (now Mercury Application Mapping). In January 2006, Mercury extended its offering with Systinet, which provides the foundation for SOA Governance and lifecycle management. For the time being, Mercury ITG manages Change and Release Management, Peregrine ServiceCenter manages Change Management, and Systinet 2 manages SOA Changes. The Governance Interoperability Framework (GIF) developed by multiple SOA vendors does not seem to cover the integration with a federated CMDB[i].
  • IBM proposes its WebSphere Service Registry and Repository V6, which integrates with the Tivoli CCMDB. This solution will communicate with Tivoli CCMDB, which manages Changes and Configuration. The Tivoli CCMDB also integrates with IBM Tivoli Release Process Manager and Tivoli Configuration Manager, which are other Release Management modules. The end to end solution will have to be validated.
  • Flashline, which joined the BEA AquaLogic product family in August 2006, is a new offering (BEA until then distributed Systinet). Now branded BEA AquaLogic Enterprise Repository, it ships with out of the box connectors to source code management systems but does not yet integrate with ITSM suites.
  • webMethods acquired Infravio but seems more focused on acquiring new clients through integration with its Fabric platform. Its governance edition integrates with system management suites such as IBM Tivoli, BMC Patrol, and CA Unicenter, but custom development would be required.
  • AmberPoint is a Web services management product which complements a repository-registry. AmberPoint integrates into any IT environment, and specifically system management suites, but does not yet consider ITSM (although it does deliver a module related to SLM).

Release Management is a pre-requisite to properly manage the service lifecycle


IT infrastructure and operations/engineering professionals using IT Service Management and starting SOA programs should evaluate the maturity of their Release Management process or consider its implementation. The acquisition of a SOA Governance platform must take into consideration existing IT Service Management suites, in order to target an end to end view of the deployment of SOA components. Without any integration, operations staff will not be able to quickly track the end to end life cycle or carry out root cause analysis efficiently when problems occur.

  • Review the process activities and ensure they take into consideration components based on SOA infrastructures. Release policy and planning will have to be adapted to SOA solution implementations, taking into consideration the use of potentially external and distributed web services. Design and build of a Release will have to integrate more granular components, which can be hosted externally and sold by vendors. Service Level Agreements will have to be defined not only at the IT Service level but also at the Web service level. Testing will have to cover non-regression for shared application components and situations where components are hosted externally.
  • Evaluate SOA Governance platforms cautiously. SOA Management platforms, metadata repositories and registries should take into account not only Release and Change Management, but also Service Level Management from Service Management suites. Identify the vendor’s strategy in terms of either partnerships with ITSM vendors or internal roadmaps, such as those of IBM and HP.
  • Understand the level of integration required between products. A complete integration would be ideal, but the existing solutions are not yet there. Some vendors have started to understand the need for integration between a SOA registry and repository and a CMDB in a federated way, the repository-registry being considered as a specialized database in this federation. Others are only providing APIs to system management solutions.
  • Ensure that Underpinning Contracts cover Release Management for third party components. Companies building applications, either composite or integrating third party web services in BPM activities, should define in an SLA with the vendors how new versions of external components should be managed in the Release Management process. Vendors should not be allowed to upgrade customers’ web services without authorization when these are hosted externally. The contract should specify that any new component modification will be part of the customer’s IT Operations Releases, including testing.


What it means...


Convergence between the SOA registry, repository and the CMDB


The SOA registry and repository has already allowed convergence of the design-time asset repository with the run-time service registry. The next step is convergence with the management repository, since we frequently see another repository associated with that space: the Configuration Management Database, or CMDB. Convergence across this space is required in order to correctly track the end to end life cycle of Web services as well as to maintain up to date software and hardware information. Part of the metadata associated with a service needs to be the machines where it is deployed, and the chances are that this information is already in the CMDB.
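The federation argued for above can be sketched in a few lines of Python; the structures, field names and CI identifiers are illustrative assumptions, not any vendor’s schema. The registry entry stores only CI identifiers, and the deployment hosts are resolved against the CMDB rather than duplicated in the registry:

```python
# Hypothetical sketch of federating a service registry with a CMDB:
# the CMDB remains the authoritative source for host information.
cmdb = {
    "ci-101": {"type": "server", "hostname": "app01", "os": "Linux"},
    "ci-102": {"type": "server", "hostname": "app02", "os": "Linux"},
}

# The registry keeps service metadata plus pointers (CI ids) into the CMDB.
service_metadata = {
    "OrderService": {"version": "1.2", "deployed_on": ["ci-101", "ci-102"]},
}

def deployment_hosts(service: str):
    # Resolve the machines a service runs on through the CMDB.
    return [cmdb[ci]["hostname"] for ci in service_metadata[service]["deployed_on"]]

print(deployment_hosts("OrderService"))  # ['app01', 'app02']
```

The design point is the direction of the pointer: the registry references the CMDB, so a server rename or redeployment is recorded once, in the CMDB, and every service entry stays current.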


Non-ITSM shops will integrate SOA Governance platforms with ad hoc deployment solutions


Companies which have not considered ITIL as an IT Operations framework can deploy a SOA Repository and Registry without fully considering the Release Management process. They will be able to manage the service lifecycle and, if they have a software deployment product, they will use it for deployment. Reconciliation between the various products will have to be done manually, or integration development will be required. However, IT Operations groups that haven’t yet endorsed IT Service Management and are starting to embrace ITIL will gain substantial benefits in terms of customers’ quality of service. Release Management, among other processes such as Change and Service Level Management, will definitely improve the quality of SOA solutions.


[i] Ten leading SOA vendors have partnered with Systinet for the GIF, including Above All Software, Actional, AmberPoint, Composite Software, DataPower, HP, Layer 7 Technologies, MetaMatrix, Reactivity, and Service Integrity.

08 December, 2006

“The CMDB Initiative…”, but where is the evolution?

Late April, a group of system and service management vendors decided to propose a standard for a repository of IT assets and how their configuration items change over time.

In other words, BMC, Fujitsu, IBM and HP decided to create a model for a CMDB which would allow storing information such as desktop and laptop client images or configurations, servers, storage pools or networks and so on.

Very often information is spread among various sources, and no standard exists on how to exchange metadata between all these potential sources of CMDB information. Today, the CMDB interfaces that exist are all proprietary, which is the problem this group wants to tackle.
This working group will issue a white paper within the next month that will spell out their initial goals in more detail. And by the end of the year, they hope to have a draft specification proposal, at which point they hope to formalize the process by choosing a standards body.

But is that enough?

Focusing on data exchange is fine, but shouldn’t they also consider the CMDB metadata and propose a common model eventually reusable by third parties?

All vendors’ CMDBs have their own meta model, and so far I have never seen any vendor publish a roadmap for the repository itself. As an example, with the acquisition of both Mercury and Peregrine, HP initially announced the migration from HP OpenView Service Desk to ServiceCenter, and has now announced that its next CMDB would be Mercury’s, which is in fact… Appilog… (an acquisition of Mercury). So what!

There is a need for a “next generation CMDB” for the following reasons:

- BPM based on a SOA architecture invokes IT components (software on hardware…) and we should have a link between a Business Process and the underlying CIs. A CMDB should also be process based.
- Relationships are important, that’s for sure, but dependencies are another key topic. How do infrastructure components relate to one another, how does information relate, how do physical components relate, how does application code relate (Cendura, acquired by CA, is maybe one of the only companies which delivered that capability), etc.? A CMDB should also be able to add dependencies on top of relationships.

This initiative will help the concept of a federated CMDB and information exchange, but will not really address today’s requirements… A unified meta model could be an interesting initiative for these vendors, as it would create a new generation of unified Service Management/Business Process Management solutions.
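To make the distinction between relationships and dependencies concrete, here is a hedged sketch (all CI, process and function names are hypothetical) of a model that keeps undirected relationships separate from directed dependencies and ties business processes to the CIs they invoke, so a failing CI can be traced up to the impacted processes:

```python
from collections import defaultdict

# Sketch of the "next generation CMDB" argued for above; names are illustrative.
relationships = defaultdict(set)   # undirected: "is connected to"
dependencies = defaultdict(set)    # directed: "depends on"
process_to_cis = {"OrderToCash": {"order-app"}}  # process-based view

def relate(a, b):
    # A plain relationship carries no direction.
    relationships[a].add(b)
    relationships[b].add(a)

def depends(a, b):
    # A dependency is directed: a breaks if b fails.
    dependencies[a].add(b)

relate("order-app", "app-server")
depends("order-app", "order-db")
depends("app-server", "host-01")

def impacted_processes(ci):
    # Which business processes are hit if this CI fails? Walk one level
    # of dependencies upward, then match against the process map.
    upstream = {a for a, deps in dependencies.items() if ci in deps}
    hit = upstream | {ci}
    return [p for p, cis in process_to_cis.items() if cis & hit]

print(impacted_processes("order-db"))  # ['OrderToCash']
```

Even this toy version shows why direction matters: a relationship alone cannot answer the impact question, only a dependency can.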

05 October, 2006

CMDB and SOA registries convergence

Without doubt we will see a convergence between Service Desk CMDBs and SOA registries and repositories. SOA Governance solutions should cover several features such as (this list is not exhaustive):

- The governance itself (design, deployment, and the run-time)
- Policies management
- The Service Life cycle Management
o Service Deployment
o Service management and monitoring
o Service publishing
o Service specification
o Service acquisition
o Service assembly
o Service testing
o Service design
o Service integration
o Service build and test
o Service asset management and publishing
- Contract Management
- Impact Management
- The Service Discovery and mediation
- Etc.

But we can foresee an integration in terms of relationships rather than a fully integrated solution. In other terms, the Service Life Cycle will require a Service to be managed in Service Management terms: a Web service will have incidents, be changed, be released, etc.

As an example, IBM just announced WebSphere Service Registry and Repository. This solution will communicate with Tivoli CCMDB, which will manage Changes and Configuration.

HP, which acquired Mercury and Peregrine… also acquired Systinet, another SOA Governance solution. All three vendors have their own CMDB, and once they are eventually integrated, wouldn’t it make sense to have connections between the Systinet Repository and “one of the HP CMDBs”?

02 October, 2006

Configuration Management: Are there any Auto discovery tool companies left?

Eighteen months ago, the market counted something like a dozen companies selling interesting products named “auto discovery” or “mapping” tools (refer to my other post: Configuration Management: How to really build a CMDB with an auto discovery tool). Among them were Appilog, Collation, Relicore, nLayers, Tideway Systems and Cendura.

Since that time, important software companies have considered these tools as highly strategic, probably because they would help to materialize Configuration Management, one of the most complex ITIL processes.

IBM, Symantec and Mercury, among others, have acquired some of these companies, and I expected the remaining ones would soon be considered for acquisition. That was my impression… at the end of last year.

Today Computer Associates has acquired Cendura, a small company (50 people) located in Silicon Valley which also provides a nice auto discovery function with lots of granularity (the level of drilldown is quite impressive). This demonstrates clearly that without such functionality, Configuration Management was an unreachable ITIL process, unrealistic to implement correctly.

26 September, 2006

Configuration Management: How to really build a CMDB with an auto discovery tool

Service management projects that use a Configuration Management Database (CMDB) as a central pointer to enterprise IT asset data should double in number over the next year, an independent research study has concluded (Tertio Service Management Solutions). CMDB uptake is expected to double from 20% to 39% within a year, said the vendor behind the research. The reason for the increased popularity is a desire to adopt an ITIL based framework among 44% of respondents.

According to ITIL best practice recommendations, the CMDB should provide accurate information on the configuration of IT assets and their documentation in a way that supports all the other service management processes. For 70% of the organizations polled online during Tertio’s research, the perceived benefit resulting from implementation of a CMDB is an improved and coordinated picture of the application infrastructure, and overall better management of IT assets.

Many companies that deployed Configuration Management have associated this process with the Asset/Inventory Management processes/systems for the following reasons:

- Desktop inventory is often based on existing inventory solutions
- Server Inventory is usually done with System Management solutions such as Tivoli, CA Unicenter, BMC Patrol, etc…

However, one activity is missing from a Configuration Management point of view: building the relationships between CIs (Configuration Items).

To be successful at Business Service Management, companies first should map applications and Services. This is probably the only way to know which IT failure is mucking up Business Services.

To address the issues with the Configuration Management process, companies should find a solution which would avoid creating manually the relationships between the various CIs (Configuration Items) which are identified with the various Inventory tools.

Companies need tools that collect systems configuration data, decode packets and watch kernel I/O to determine application dependencies. These are not service, performance or status monitors; rather, they map all the servers, systems, applications, processes, services, users and sometimes even network devices to decipher what applications depend on, and what depends on them. This mapping also makes diagnostics, capacity planning and Service Management application delivery easier.

These tools should be able to:

- Improve the quality of data in a CMDB
- Be in real-time
- Populate data into the Service Desk
- Avoid maintaining the systems manually
- Keep the relationships between CIs up to date in real time
- Improve all ITIL processes, but specifically Incident Management and Change Management
- Provide Change Impact Analysis features
- Track people changing the infrastructure into production
- Deliver Business Services views
- Map ITIL processes
- Integrate smoothly with both the Service Desk and the System Management solution
- Replicate these relationships into Business Service Management solutions
- Send alerts to the System Management console when for example a change on the Infrastructure is done in production
- Replicate these relationships with Enterprise Architecture platforms such as Mega, Troux technologies, Casewise, Popkin, etc..
- Provide out of the box connectors
- Cope with the long term strategy around Service Management and Enterprise Architecture
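The core inference such a tool performs can be sketched in a few lines: turn observed connections into CI relationships instead of creating and maintaining them by hand. The input format and the host/process names below are assumptions for illustration only:

```python
# Hypothetical sketch of the heart of an auto discovery tool: connections
# observed on the wire (e.g. from packet decoding) become CI relationships.
observed_connections = [
    ("web01:apache", "app01:tomcat"),
    ("app01:tomcat", "db01:oracle"),
]

def build_ci_graph(connections):
    graph = {}
    for src, dst in connections:
        # Each "host:process" endpoint becomes a CI; each observed
        # connection becomes a relationship between two CIs.
        graph.setdefault(src, set()).add(dst)
    return graph

graph = build_ci_graph(observed_connections)
print(graph["app01:tomcat"])  # {'db01:oracle'}
```

Re-running this over a fresh capture is what keeps the relationships current in real time, which is precisely what manual CMDB maintenance cannot do.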


There used to be more than 10 to 15 vendors in that space; half of them have been acquired by companies such as Mercury, IBM, Symantec, BMC and others, who integrated these tools into their Service Management products.

Most companies haven’t yet considered implementing such solutions, but adoption is slowly starting. Auto discovery tools are a must!

29 August, 2006

Will Service Management solutions become SOA Governance platforms?

As a starting point, let’s re-define what a Configuration Management Database (CMDB) is… A CMDB is a database that holds a complete record of all configuration items (CIs) associated with the IT infrastructure, software and hardware, including information about servers, storage devices, networks, middleware, applications and data: versions, location, documentation, components and the relationships between them.

Configuration Management, which is one of the main ITIL processes, requires the use of support tools, including a CMDB. Physical and electronic libraries should be set up in parallel with the CMDB to hold definitive copies of software and documentation.

Until now, several vendors have provided, through their Service Desk offerings, an out of the box CMDB which in some cases could be altered. Among these vendors we can find BMC-Remedy, HP, Peregrine (now HP…), Axios Systems, Computer Associates, Mercury (now HP…), IBM, and many others.

Last April, some vendors like CA, BMC, IBM, and Fujitsu announced they would work toward developing "an industry standard for federating and accessing IT information" that would ideally integrate communication between disparate configuration management databases.

CMDBs have become one of the central elements of enterprise IT management, so a standards-based approach to this critical functionality is necessary and valuable.

Looking at SOA and the way we define composite applications and services, we definitely need to build the latter on top of the existing IT infrastructure, software and hardware. In other words, a CMDB could also be used to manage the catalogue of SOA services!

I would be tempted to think that in the next two years, a CMDB will be a modular component, usable by either Service Management solutions and/or SOA Governance products. A CMDB could become a sort of “plugin” available from various vendors with sets of APIs, and why not web services.
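That “plugin” idea can be sketched as a small interface that either a Service Management suite or a SOA Governance product could program against; the interface, method and CI names are hypothetical, a thought experiment rather than any vendor’s API:

```python
from abc import ABC, abstractmethod

# Hypothetical "CMDB as plugin" contract: any backing store that implements
# it can serve both ITSM and SOA Governance consumers.
class CMDBPlugin(ABC):
    @abstractmethod
    def get_ci(self, ci_id: str) -> dict: ...

    @abstractmethod
    def related(self, ci_id: str) -> list: ...

class InMemoryCMDB(CMDBPlugin):
    def __init__(self, cis, links):
        self._cis, self._links = cis, links

    def get_ci(self, ci_id):
        # Return the attributes recorded for a CI.
        return self._cis[ci_id]

    def related(self, ci_id):
        # Return the CIs related to this one, in a stable order.
        return sorted(self._links.get(ci_id, []))

cmdb = InMemoryCMDB(
    {"svc-quote": {"type": "web-service"}, "host-01": {"type": "server"}},
    {"svc-quote": {"host-01"}},
)
print(cmdb.related("svc-quote"))  # ['host-01']
```

A web-service facade over this same interface is what would make the plugin consumable across vendors, which is the modularity argued for above.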

SOAs are distributed computing plans where companies often situate Web services and reuse code and other assets to create efficiencies. Vendors like IBM, Microsoft, BEA, Oracle, and Mercury are creating SOA infrastructure platforms to speed information exchange between different computing machines. A few months ago, prior to HP acquisition, Mercury acquired a SOA company named Systinet. This acquisition strengthened Mercury's position in the high growth SOA (Service-Oriented Architecture) market by giving the company leading SOA governance and lifecycle management products.

An integration point should exist between the SOA metadata repositories and a configuration management database (CMDB) to manage the lifecycle through to operations. If HP considers the range of acquisitions it recently made, Peregrine, Mercury... and Systinet…, there is a high probability this integration will occur!

Another observation on this future integration is potentially visible at IBM. IBM recently released a CMDB, Tivoli CCMDB, and also launched management and security solutions for managing SOA based applications; last April, IBM launched its IT Service Management platform.

The IBM IT Service Management platform manages SOA based composite applications. It is supposed to offer an approach to defining a framework and solutions for IT service management, including extending self-managing autonomic computing to IT services.

“Tivoli CCMDB uses a Federated model that allows it to be implemented on top of an existing sources of IT data, and serves as an authoritative source of data for configuration items, their relationships, so that when a change needs to be made to any of the IT components, one can understand the impact of that change on other related components. IBM's ITSM platform along with, IBM Tivoli security and compliance products like Tivoli Access Manager and Tivoli Federated Identity Manager delivers a complete end-to-end solution for the "manage", "secure" and "compliance" of distributed SOA applications.”

IBM and HP are two companies which will probably compete in both IT Service Management and SOA. They have probably understood the synergy between the two worlds, and we can predict a future generation of CMDBs: modular, accessible from web services, and used for several company needs: Service Management and SOA Governance.