Go SDK developer's guide - Observability

The observability section of the Temporal Developer's guide covers the many ways to view the current state of your Temporal Application: that is, ways to view which Workflow Executions are tracked by the Temporal Platform and the state of any specified Workflow Execution, either currently or at points of an execution. (A Temporal Application is a set of Workflow Executions; the Temporal Platform consists of a Temporal Cluster and Worker Processes.)

WORK IN PROGRESS

This guide is a work in progress. Some sections may be incomplete or missing for some languages. Information may change at any time.

If you can't find what you are looking for in the Developer's guide, it could be in older docs for SDKs.

This section covers features related to viewing the state of the application, including:

  • Metrics
  • Tracing and Context Propagation
  • Logging
  • Visibility

Metrics

Each Temporal SDK is capable of emitting an optional set of metrics from either the Client or the Worker process. For a complete list of metrics capable of being emitted, see the SDK metrics reference.

Metrics can be scraped and stored in time series databases, such as Prometheus or M3.

Temporal also provides a dashboard that you can integrate with graphing services like Grafana; see the dashboard and Grafana setup documentation for details.

To emit metrics from the Temporal Client in Go, set a metrics handler in the Client Options and specify a listen address for Prometheus to scrape.

client.Options{
    MetricsHandler: sdktally.NewMetricsHandler(newPrometheusScope(prometheus.Configuration{
        ListenAddress: "0.0.0.0:9090",
        TimerType:     "histogram",
    })),
}
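
The newPrometheusScope helper is not part of the SDK. A sketch of one possible implementation, adapted from the Go metrics sample and assuming the Tally v4 and contrib/tally packages, might look like this:

import (
    "log"
    "time"

    prom "github.com/prometheus/client_golang/prometheus"
    "github.com/uber-go/tally/v4"
    "github.com/uber-go/tally/v4/prometheus"
    sdktally "go.temporal.io/sdk/contrib/tally"
)

// newPrometheusScope builds a Tally root scope whose metrics are served
// to Prometheus at the configured listen address.
func newPrometheusScope(c prometheus.Configuration) tally.Scope {
    reporter, err := c.NewReporter(
        prometheus.ConfigurationOptions{
            Registry: prom.NewRegistry(),
            OnError: func(err error) {
                log.Println("error in prometheus reporter", err)
            },
        },
    )
    if err != nil {
        log.Fatalln("error creating prometheus reporter", err)
    }
    scopeOpts := tally.ScopeOptions{
        CachedReporter:  reporter,
        Separator:       prometheus.DefaultSeparator,
        SanitizeOptions: &sdktally.PrometheusSanitizeOptions,
    }
    scope, _ := tally.NewRootScope(scopeOpts, time.Second)
    // Rename metrics to match Prometheus naming conventions.
    scope = sdktally.NewPrometheusNamingScope(scope)
    return scope
}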

The Go SDK currently supports the Tally library. Tally offers extensible custom metrics reporting, which the Temporal Server exposes through its WithCustomMetricsReporter option.

For more information, see the Go sample for metrics.

Tracing and Context Propagation

The Go SDK provides support for distributed tracing through OpenTracing. Tracing allows you to view the call graph of a Workflow along with its Activities and any Child Workflows.

Tracing can be configured by providing an opentracing.Tracer implementation in ClientOptions during client instantiation. For more details on how to configure and leverage tracing, see the OpenTracing documentation.

The OpenTracing support has been validated using Jaeger, but other OpenTracing-compatible implementations should also work. Tracing functionality utilizes generic context propagation provided by the Client.
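
For example, in recent SDK versions OpenTracing can be wired in through the contrib interceptor module. The following is a minimal sketch, assuming the go.temporal.io/sdk/contrib/opentracing package and a tracer already configured for Jaeger or another backend:

package main

import (
    "log"

    "github.com/opentracing/opentracing-go"
    "go.temporal.io/sdk/client"
    sdkopentracing "go.temporal.io/sdk/contrib/opentracing"
    "go.temporal.io/sdk/interceptor"
)

func main() {
    // Use the opentracing.Tracer from your tracing backend;
    // the global tracer is only a stand-in here.
    tracingInterceptor, err := sdkopentracing.NewInterceptor(sdkopentracing.TracerOptions{
        Tracer: opentracing.GlobalTracer(),
    })
    if err != nil {
        log.Fatalln("failed to create tracing interceptor", err)
    }

    c, err := client.Dial(client.Options{
        Interceptors: []interceptor.ClientInterceptor{tracingInterceptor},
    })
    if err != nil {
        log.Fatalln("failed to create Temporal Client", err)
    }
    defer c.Close()
}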

Context Propagation

Temporal provides a standard way to propagate a custom context across a Workflow. You can configure a context propagator via the ClientOptions. The context propagator extracts and passes on information present in context.Context and workflow.Context objects across the Workflow. Once a context propagator is configured, you can access the required values in the context objects as you normally would in Go. You can see how the Go SDK implements a tracing context propagator.

Server-Side Headers

On the server side, Temporal provides a mechanism for propagating context across Workflow transitions called headers.

message Header {
    map<string, Payload> fields = 1;
}

The Client leverages headers to pass around additional context information. HeaderReader and HeaderWriter are interfaces that allow reading from and writing to the Temporal Server headers, and the SDK includes implementations of both. HeaderWriter sets a value for a header key. Headers are held as a map, so setting a value for an existing key overwrites its previous value. HeaderReader gets the value of a header key. It can also iterate through all headers, executing a provided handler function on each one, so that your code can operate on just the headers you need.

type HeaderWriter interface {
    Set(string, *commonpb.Payload)
}

type HeaderReader interface {
    Get(string) (*commonpb.Payload, bool)
    ForEachKey(handler func(string, *commonpb.Payload) error) error
}

Context Propagators

You can propagate additional context through Workflow Execution by using a context propagator. A context propagator needs to implement the ContextPropagator interface that includes the following four methods:

type ContextPropagator interface {
    Inject(context.Context, HeaderWriter) error

    Extract(context.Context, HeaderReader) (context.Context, error)

    InjectFromWorkflow(Context, HeaderWriter) error

    ExtractToWorkflow(Context, HeaderReader) (Context, error)
}
  • Inject reads select context keys from a Go context.Context object and writes them into the headers using the HeaderWriter interface.
  • InjectFromWorkflow operates similarly to Inject but reads from a workflow.Context object.
  • Extract picks select headers and puts their values into the context.Context object.
  • ExtractToWorkflow operates similarly to Extract but writes to a workflow.Context object.

The tracing context propagator shows a sample implementation of a context propagator.
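
As an illustration, here is a minimal sketch of a propagator that carries a single string value under a hypothetical header key; the type and constructor names are invented for this example, and the official context propagation sample covers a complete version:

import (
    "context"

    "go.temporal.io/sdk/converter"
    "go.temporal.io/sdk/workflow"
)

// contextKey and headerKey are hypothetical names for this sketch.
type contextKey struct{}

const headerKey = "custom-header"

type keyPropagator struct{}

// NewCustomKeyPropagator returns a ContextPropagator that carries a single
// string value across Workflow transitions.
func NewCustomKeyPropagator() workflow.ContextPropagator {
    return &keyPropagator{}
}

// Inject writes the value from a Go context into the headers.
func (p *keyPropagator) Inject(ctx context.Context, writer workflow.HeaderWriter) error {
    value, ok := ctx.Value(contextKey{}).(string)
    if !ok {
        return nil
    }
    payload, err := converter.GetDefaultDataConverter().ToPayload(value)
    if err != nil {
        return err
    }
    writer.Set(headerKey, payload)
    return nil
}

// InjectFromWorkflow does the same, reading from a workflow.Context.
func (p *keyPropagator) InjectFromWorkflow(ctx workflow.Context, writer workflow.HeaderWriter) error {
    value, ok := ctx.Value(contextKey{}).(string)
    if !ok {
        return nil
    }
    payload, err := converter.GetDefaultDataConverter().ToPayload(value)
    if err != nil {
        return err
    }
    writer.Set(headerKey, payload)
    return nil
}

// Extract reads the header back into a Go context.
func (p *keyPropagator) Extract(ctx context.Context, reader workflow.HeaderReader) (context.Context, error) {
    if payload, ok := reader.Get(headerKey); ok {
        var value string
        if err := converter.GetDefaultDataConverter().FromPayload(payload, &value); err != nil {
            return ctx, err
        }
        ctx = context.WithValue(ctx, contextKey{}, value)
    }
    return ctx, nil
}

// ExtractToWorkflow reads the header back into a workflow.Context.
func (p *keyPropagator) ExtractToWorkflow(ctx workflow.Context, reader workflow.HeaderReader) (workflow.Context, error) {
    if payload, ok := reader.Get(headerKey); ok {
        var value string
        if err := converter.GetDefaultDataConverter().FromPayload(payload, &value); err != nil {
            return ctx, err
        }
        ctx = workflow.WithValue(ctx, contextKey{}, value)
    }
    return ctx, nil
}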

Is there a complete example?

The context propagation sample configures a custom context propagator and shows context propagation of custom keys across a Workflow and an Activity. It also uses Jaeger for tracing.

Can I configure multiple context propagators?

Yes. Multiple context propagators help you structure your code, with each propagator having its own scope of responsibility, as sketched below.
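
For example, a sketch of registering more than one propagator; NewCustomKeyPropagator is the sketch from above, and NewTracingContextPropagator is a hypothetical stand-in for a tracing-oriented propagator:

clientOptions := client.Options{
    ContextPropagators: []workflow.ContextPropagator{
        NewCustomKeyPropagator(),      // propagates the custom business key
        NewTracingContextPropagator(), // hypothetical: propagates trace context
    },
}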


Logging

Send logs and errors to a logging service, so that when things go wrong, you can see what happened.

The SDK core uses WARN for its default logging level.

In Workflow Definitions you can use workflow.GetLogger(ctx) to write logs.

import (
    "time"

    "go.temporal.io/sdk/workflow"
)

// Workflow is a standard workflow definition.
// Note that the Workflow and Activity don't need to care that
// their inputs/results are being compressed.
func Workflow(ctx workflow.Context, name string) (string, error) {
    // The Activity options were elided in the original sample;
    // the timeout here is illustrative.
    ao := workflow.ActivityOptions{
        StartToCloseTimeout: 10 * time.Second,
    }
    ctx = workflow.WithActivityOptions(ctx, ao)

    // Get the logger from the workflow context.
    logger := workflow.GetLogger(ctx)
    // Log a message with the key-value pair `name` and its value.
    logger.Info("Compressed Payloads workflow started", "name", name)

    info := map[string]string{
        "name": name,
    }

    // Execute an Activity ("Activity" is its name in the codec sample,
    // where its definition lives).
    var result string
    if err := workflow.ExecuteActivity(ctx, Activity, info).Get(ctx, &result); err != nil {
        logger.Error("Activity failed.", "Error", err)
        return "", err
    }

    logger.Info("Compressed Payloads workflow completed.", "result", result)

    return result, nil
}

Custom logger

Use a custom logger for logging.

The Logger field of the Client Options sets a custom Logger that is used for all logging actions of that instance of the Temporal Client.
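
A custom Logger must satisfy the SDK's log.Logger interface (from the go.temporal.io/sdk/log package), which at the time of writing looks like this:

type Logger interface {
    Debug(msg string, keyvals ...interface{})
    Info(msg string, keyvals ...interface{})
    Warn(msg string, keyvals ...interface{})
    Error(msg string, keyvals ...interface{})
}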

Although the Go SDK does not natively support most third-party logging solutions, our friends at Banzai Cloud built the adapter package Logur, which makes it possible to use third-party loggers with minimal overhead. Most of the popular logging solutions already have adapters in Logur; you can find the full list in the Logur GitHub project.

Here is an example of using Logur to support Logrus:

package main

import (
    "go.temporal.io/sdk/client"

    "github.com/sirupsen/logrus"
    logrusadapter "logur.dev/adapter/logrus"
    "logur.dev/logur"
)

func main() {
    // ...
    logger := logur.LoggerToKV(logrusadapter.New(logrus.New()))
    clientOptions := client.Options{
        Logger: logger,
    }
    temporalClient, err := client.Dial(clientOptions)
    // ...
}

Visibility

The term Visibility, within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view Workflow Executions that currently exist within a Cluster.

Search Attributes

The typical method of retrieving a Workflow Execution is by its Workflow Id.

However, sometimes you'll want to retrieve one or more Workflow Executions based on another property. For example, imagine you want to get all Workflow Executions of a certain type that have failed within a time range, so that you can start new ones with the same arguments.

You can do this with Search Attributes. (A Search Attribute is an indexed name used in List Filters to filter a list of Workflow Executions that have the Search Attribute in their metadata.)

The steps to using custom Search Attributes are:

  • Create a new Search Attribute in your Cluster.
  • Set the value of the Search Attribute for a Workflow Execution.
  • Filter Workflow Executions by the Search Attribute, using a List Filter.

Here is how to query Workflow Executions: use Client.ListWorkflow.
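
For illustration, a sketch of listing Workflow Executions with a List Filter might look like the following; the query references the hypothetical CustomerId attribute used later in this section:

import (
    "context"
    "fmt"

    "go.temporal.io/api/workflowservice/v1"
    "go.temporal.io/sdk/client"
)

// listFailedWorkflows prints the Workflow Id and Run Id of every
// matching Workflow Execution.
func listFailedWorkflows(ctx context.Context, c client.Client) error {
    resp, err := c.ListWorkflow(ctx, &workflowservice.ListWorkflowExecutionsRequest{
        Query: "CustomerId = 'customer-1' AND ExecutionStatus = 'Failed'",
    })
    if err != nil {
        return err
    }
    for _, info := range resp.GetExecutions() {
        fmt.Println(info.GetExecution().GetWorkflowId(), info.GetExecution().GetRunId())
    }
    return nil
}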

Custom Search Attributes

After you've created custom Search Attributes in your Cluster (using tctl search-attribute create or the Cloud UI), you can set the values of the custom Search Attributes when starting a Workflow.

Provide key-value pairs in StartWorkflowOptions.SearchAttributes.

Search Attributes are represented as map[string]interface{}. The values in the map must correspond to the Search Attribute's value type:

  • Bool = bool
  • Datetime = time.Time
  • Double = float64
  • Int = int64
  • Keyword = string
  • Text = string

If you had custom Search Attributes CustomerId of type Keyword and MiscData of type Text, you would provide string values:

func (c *Client) CallYourWorkflow(ctx context.Context, workflowID string, payload map[string]interface{}) error {
    // ...
    searchAttributes := map[string]interface{}{
        "CustomerId": payload["customer"],
        "MiscData":   payload["miscData"],
    }
    options := client.StartWorkflowOptions{
        SearchAttributes: searchAttributes,
        // ...
    }
    we, err := c.Client.ExecuteWorkflow(ctx, options, app.YourWorkflow, payload)
    // ...
}

Upsert Search Attributes

You can add or update Search Attributes from within Workflow code. In advanced cases, you may want to dynamically update these attributes as the Workflow progresses; UpsertSearchAttributes is the API for doing so.

UpsertSearchAttributes merges the given attributes into the Workflow's existing Search Attribute map. Consider this example Workflow code:

func YourWorkflow(ctx workflow.Context, input string) error {
    attr1 := map[string]interface{}{
        "CustomIntField":  1,
        "CustomBoolField": true,
    }
    workflow.UpsertSearchAttributes(ctx, attr1)

    attr2 := map[string]interface{}{
        "CustomIntField":     2,
        "CustomKeywordField": "seattle",
    }
    workflow.UpsertSearchAttributes(ctx, attr2)

    return nil
}

After the second call to UpsertSearchAttributes, the map will contain:

map[string]interface{}{
    "CustomIntField":     2, // last update wins
    "CustomBoolField":    true,
    "CustomKeywordField": "seattle",
}

Remove Search Attribute

To remove a Search Attribute that was previously set, set it to an empty slice: [].

There is no support for removing a field from the map entirely.

However, to achieve a similar effect, set the field to a placeholder value. For example, you could set CustomKeywordField to impossibleVal. Then searching CustomKeywordField != 'impossibleVal' will match Workflows with CustomKeywordField not equal to impossibleVal, which includes Workflows without the CustomKeywordField set.
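
For example, a sketch of clearing a previously set value from within Workflow code, using the field from the upsert example above:

// Upserting an empty slice clears the value of CustomKeywordField.
workflow.UpsertSearchAttributes(ctx, map[string]interface{}{
    "CustomKeywordField": []string{},
})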