Five things you need to know about OpenTelemetry

Cristina De Luca -

July 07, 2022

OpenTelemetry, an open-source framework of APIs, SDKs, data models, and semantic conventions that allows telemetry from multiple sources to be represented for observability purposes, is being widely promoted as a key element in enabling organizations to extend, manage, and optimize visibility across the IT infrastructure of their complex multi-cloud environments.

In practice, it behaves as an “integration” layer between your observability data collection (logs, metrics, traces) and your data stores. You don’t have to be tied to a single vendor and can combine tools: for example, you might use Telegraf to collect metrics but store those metrics in Prometheus. Observability data can and should also feed your standard monitoring tools, such as PRTG or Nagios.
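To make the integration-layer idea concrete, here is a minimal Python sketch. It assumes the opentelemetry-sdk and opentelemetry-exporter-otlp packages are installed and that an OpenTelemetry Collector (or any OTLP-capable backend) is listening on localhost:4317; the service name, metric name, and endpoint are illustrative. The application only speaks the vendor-neutral OTLP protocol, so whether the measurements end up in Prometheus, a commercial tool, or both is decided in the Collector configuration rather than in the code.

```python
# Minimal sketch: the application emits metrics through the vendor-neutral
# OpenTelemetry SDK; where they end up (Prometheus, a SaaS backend, or both)
# is decided by whatever OTLP endpoint or Collector receives them.
# Assumes: pip install opentelemetry-sdk opentelemetry-exporter-otlp
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.sdk.resources import Resource
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter

# "checkout-service" and the localhost endpoint are illustrative values.
reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(endpoint="http://localhost:4317", insecure=True)
)
provider = MeterProvider(
    resource=Resource.create({"service.name": "checkout-service"}),
    metric_readers=[reader],
)
metrics.set_meter_provider(provider)

meter = metrics.get_meter(__name__)
request_counter = meter.create_counter("http.server.requests", unit="1")
request_counter.add(1, {"route": "/checkout"})  # one illustrative measurement

provider.shutdown()  # flush pending metrics before the script exits
```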

Monitoring allows us to detect that something is not working properly, but it does not tell us why. Moreover, it can only watch for situations that were predicted (known) in advance. Observability, on the other hand, is based on integrating and relating multiple sources of telemetry data, which together help us understand how the software system under observation actually behaves, rather than merely identifying problems. The most critical aspect, however, is what is done with the data after it is collected: why rely on predefined thresholds when we can automatically detect unusual ‘change points’?
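As a toy illustration of that last point (the latency samples and the two-times-baseline rule are invented for the example, not a recommended algorithm), a fixed threshold can stay silent through a clear shift in behaviour that even a very naive change-point check would flag:

```python
# Naive illustration only: a fixed threshold versus a crude change-point check.
# The samples and the "2x the recent baseline" rule are invented for the example.
latencies_ms = [102, 98, 105, 99, 101, 240, 238, 245]  # hypothetical response times

THRESHOLD_MS = 300
threshold_alerts = [x for x in latencies_ms if x > THRESHOLD_MS]  # never fires here


def shifted(window: list[float], factor: float = 2.0) -> bool:
    """Flag a change point when the newest sample is far above the window mean."""
    baseline = sum(window[:-1]) / len(window[:-1])
    return window[-1] > factor * baseline


change_point = next(
    (i for i in range(3, len(latencies_ms)) if shifted(latencies_ms[: i + 1])),
    None,
)
print(threshold_alerts, change_point)  # [] 5 -> the jump at index 5 is detected
```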

In short, OpenTelemetry makes telemetry robust and portable. In the opinion of João Fabio de Valentin, Head of Cisco AppDynamics for Latin America, “it is attractive to organizations that want to meet growing customer demands for complete digital experiences.”

But he himself acknowledges that with so much noise around the concept of OpenTelemetry, it is important to have a clear understanding of this new data standard and how it can be applied to monitor the entire application stack and directly link application performance to business results.

In addition to knowing the full range of revolutionary benefits that OpenTelemetry can offer, IT professionals also need to recognize the limitations of this new monitoring method, so that they can ensure they have the proper tools and platforms in place to realize its full potential.

With this in mind, Cisco lists five essential factors that every IT professional should know about OpenTelemetry:

  1. It is the “lingua franca” of full-stack telemetry

Because it is an open-source, global telemetry standard that is available to all at no cost, and capable of generating, collecting, and exporting telemetry data, OpenTelemetry has been implemented and supported by cloud service providers, observability vendors (including AppDynamics), and end users in many different markets.
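As a small illustration of what that “lingua franca” looks like in code, the Python sketch below (assuming the opentelemetry-api and opentelemetry-sdk packages; the span name, attribute values, and console exporter are stand-ins) emits a span whose attribute keys follow the shared OpenTelemetry semantic conventions, so any compliant backend can interpret them the same way:

```python
# Minimal sketch: a span whose attribute names follow the shared
# OpenTelemetry semantic conventions, so any compliant backend can
# interpret them identically ("lingua franca").
# Assumes: pip install opentelemetry-api opentelemetry-sdk
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# ConsoleSpanExporter stands in for any OTLP-compatible vendor backend.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout.example")  # illustrative scope name
with tracer.start_as_current_span("GET /checkout") as span:
    # Attribute keys use standard semantic-convention names rather than
    # vendor-specific ones, so the span stays portable across tools.
    span.set_attribute("http.method", "GET")
    span.set_attribute("http.status_code", 200)

provider.shutdown()  # flush pending spans before the script exits
```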

  2. Visibility into the entire IT infrastructure

OpenTelemetry generates telemetry data across the entire IT stack, including layers where professionals previously had little visibility, so they can quickly and easily gain real-time visibility into any environment.

For startups, it is an agile and cost-effective way to build observability in from day one through its APIs and SDKs, while for large enterprises it enables data management in increasingly fragmented and complex IT environments.
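One hedged example of that “day one” point: with a contributed instrumentation package (here opentelemetry-instrumentation-requests; the package choice and the traced URL are assumptions for illustration), existing code starts producing spans without being rewritten:

```python
# Minimal sketch: instrument an existing HTTP client library without
# changing the application code that uses it.
# Assumes: pip install opentelemetry-sdk opentelemetry-instrumentation-requests requests
import requests
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.instrumentation.requests import RequestsInstrumentor

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# One call patches the requests library; every HTTP call made through it
# now produces a span automatically.
RequestsInstrumentor().instrument()

requests.get("https://example.com")  # illustrative request, now traced

provider.shutdown()  # flush spans before the script exits
```

The same idea can be applied at startup, without code changes, through the opentelemetry-instrument command-line wrapper shipped with the Python distribution packages.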

  3. Flexibility, choice and freedom from vendor lock-in

It eliminates the vendor lock-in that has previously been a barrier for many IT teams. That’s because, with OpenTelemetry, companies can duplicate performance and availability data and send it to multiple places, whether specific tools or an enterprise-grade observability solution.

This means that any team, whether CloudOps, SRE or ITOps, has the freedom and flexibility to choose the most appropriate tools to understand the telemetry data it collects.
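A minimal sketch of that fan-out, assuming the opentelemetry-sdk and opentelemetry-exporter-otlp packages and an OTLP endpoint on localhost:4317 (both destinations here are illustrative): the same spans are handed to two exporters, so one copy can go to a local tool and another to an enterprise-grade platform.

```python
# Minimal sketch: duplicate the same telemetry to two destinations by
# registering two span processors on one provider.
# Assumes: pip install opentelemetry-sdk opentelemetry-exporter-otlp
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
# Copy 1: a local/console destination (stand-in for a team-specific tool).
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
# Copy 2: an OTLP destination such as a Collector feeding an enterprise platform.
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("fanout.example")
with tracer.start_as_current_span("nightly-batch-job"):
    pass  # this span is exported by both processors

provider.shutdown()  # flush both exporters before exit
```

In practice, teams often perform this duplication in an OpenTelemetry Collector pipeline with multiple exporters rather than in application code, which keeps the routing decision out of the application entirely.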

  4. It does not, by itself, help teams understand telemetry data

OpenTelemetry is excellent at collecting isolated data and putting it together, but it does not provide a more comprehensive view of performance. In other words, it is concerned only with generating data, which on its own does not help IT teams understand the large volumes of data they create.

“It is vital that IT professionals understand how to consume, process and relate the wealth of telemetry data in their full-stack observability solutions, using advanced analytics, machine learning and AI, to gain a consolidated and holistic view of their data. Only with this level of insight will they be able to eliminate data noise, make informed decisions and prioritize actions based on potential business impact,” explains de Valentin.

  5. It needs to be a component of a full-stack observability strategy

Full-stack observability has become an important focus for companies across all industries. AppDynamics’ latest report, The Journey to Observability, revealed that more than half (52%) of Brazilian companies have already started their journey towards full-stack observability, while 24% are in the early stages and 22% are ready to start this process.

Without a doubt, many are now looking to OpenTelemetry as the key enabler for achieving their full observability goals in the coming months. They recognize how this open framework can quickly accelerate their efforts to generate unified 360° visibility into their IT environments, including on-premises, public, and hybrid cloud.

Source: AppDynamics

However, for OpenTelemetry to deliver on its promise and provide real value, practitioners need to ensure they have the tools to turn telemetry data into meaningful insights. They need to be able to process and integrate telemetry data into their full-stack observability platforms.

“The teams that can do this will be perfectly positioned to move forward with their full-stack observability programs, with the data and insights needed to optimize IT performance and deliver always-on digital experiences,” he says.