Understanding workflow design


Introduction

Designing well-functioning workflows in Cargo is like writing quality code: it requires a clear plan before you dive in. This short section highlights some critical design choices that will improve the experience of building and maintaining Cargo workflows.

Nearly all Cargo workflows follow a simple logic: the workflow receives data, transforms it, and then pushes that data either to a 'system of record' (within Cargo or externally) or to an 'activation tool' (e.g. email marketing or outreach tools).
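
To make that shape concrete, here is a minimal TypeScript sketch of the receive-transform-push pattern. Everything in it (the Lead type and the three functions) is invented for illustration and is not Cargo's API; each function simply stands in for a workflow node.

```typescript
// Illustrative only: each function stands in for a workflow node.

interface Lead {
  email: string;
  company?: string;
}

// 1. Receive: records enter the workflow (e.g. from a segment or a webhook).
function receive(): Lead[] {
  return [{ email: "jane@acme.com" }];
}

// 2. Transform: enrich or reshape each record.
function transform(lead: Lead): Lead {
  return { ...lead, company: lead.email.split("@")[1] };
}

// 3. Push: hand the result to a system of record or an activation tool.
function push(lead: Lead): void {
  console.log(`pushing ${lead.email} (${lead.company}) downstream`);
}

receive().map(transform).forEach(push);
```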


Workflow conception

Workflows need to be unidirectional in their logic. This means that each node in the workflow only moves forward, building upon the data processed by the nodes that came before it.

When planning your workflow, start by mapping out the entire process from beginning to end. Place the starting and ending nodes first, using placeholder values, then incrementally introduce additional components into the execution flow and test them.

Testing is crucial: after adding a new node, immediately test it with a few records. A few habits keep workflows readable and observable as they grow:

- Utilize autocomplete features to avoid errors in data mapping.
- Create observability by declaring explicit variables for the data being mapped across different nodes (see the sketch after this list).
- Maximize abstraction by consolidating repetitive actions at the top of the workflow, so that subsequent nodes can call upon that logic.
- Use clear and meaningful names for nodes and segments to maintain readability.
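
The explicit-variables habit is easiest to see in code. Below is a hypothetical TypeScript sketch (the variable names and scoring rule are invented): each node's output is bound to a descriptive name, so any intermediate stage can be inspected on its own when a run misbehaves.

```typescript
// Illustrative only: explicit, descriptively named intermediate variables
// make each step's output inspectable.

interface Lead {
  email: string;
  score?: number;
}

const rawLeads: Lead[] = [{ email: "jane@acme.com" }, { email: "joe@foo.io" }];

// Each step's output gets its own named variable rather than an anonymous blob.
const scoredLeads = rawLeads.map((lead) => ({
  ...lead,
  score: lead.email.endsWith(".io") ? 80 : 40,
}));
const qualifiedLeads = scoredLeads.filter((lead) => (lead.score ?? 0) > 50);

// Any stage can now be logged and checked in isolation.
console.log({ rawLeads, scoredLeads, qualifiedLeads });
```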


Node naming

When you incorporate a node from the node catalog into your workflow, it is initially assigned a unique but generic name, typically combining the connector name and the action type it performs. For clarity and efficiency within the workflow, renaming these nodes to more accurately reflect their specific function is considered a best practice. You can also use the node description to provide better context about the node's function.

For example, consider a scenario where your workflow includes two read nodes from Apollo. The default naming convention might label them both as apolloio_read, which doesn't offer much insight into their distinct roles within the workflow. If one node searches the accounts object and the other the contacts object, renaming them to apollo_accounts and apollo_contacts respectively clarifies their functions at a glance. This hugely simplifies future modifications and troubleshooting as workflows grow bigger and more complex.

Moreover, if multiple Apollo nodes are in use, use the node description to specify each node's input and desired output, e.g. 'retrieves marketing executives belonging to the domain'.
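
For illustration, here is a hypothetical sketch of those two renamed nodes side by side. The object shape is invented and is not Cargo's workflow format; the point is only that name plus description makes each node self-documenting.

```typescript
// Illustrative only: renamed nodes plus descriptions document themselves.

const apolloNodes = [
  {
    name: "apollo_accounts", // was the generic default "apolloio_read"
    description: "Searches the accounts object for the enrolled domain",
  },
  {
    name: "apollo_contacts", // was another generic "apolloio_read"
    description: "Retrieves marketing executives belonging to the domain",
  },
];

console.log(apolloNodes.map((node) => node.name).join(", "));
```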


Segment naming

Segments in a workflow serve as targeted filters on a data model, allowing for refined data manipulation and analysis. It's common to have multiple segments referencing the same model, often differentiated by slight, yet critical, variations. Given that segment names are directly accessible within the workflow, they can play a pivotal role in facilitating conditional logic.

Crafting segment names with precision and thoughtfulness can significantly enhance the workflow's functionality and readability. For instance, if segments are designed to categorize contacts based on engagement levels, such as 'high_engagement' and 'low_engagement', these names can directly inform conditional paths within the workflow, triggering specific actions based on the engagement category of the data in question.
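
A hypothetical TypeScript sketch of that idea follows. The segment names come from the example above; the routing targets are invented, and nothing here is Cargo's API. The point is that a well-named segment can double as a branch condition.

```typescript
// Illustrative only: segment names doubling as branch conditions.

type SegmentName = "high_engagement" | "low_engagement";

function route(segment: SegmentName, email: string): string {
  switch (segment) {
    case "high_engagement":
      return `${email}: enroll in direct sales outreach`;
    case "low_engagement":
      return `${email}: enroll in a nurture campaign`;
  }
}

console.log(route("high_engagement", "jane@acme.com"));
```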


Managing execution speed

Cargo workflows handle enrolled records asynchronously, meaning that one record's execution doesn't wait for another record's to finish. Execution in workflows is typically rapid.

Making external API calls, however, introduces delays, as the speed of execution is bottlenecked by the rate limits these services impose.

While designing workflows, mitigate latency by minimizing dependency on slow APIs, avoiding unnecessary API calls, and optimizing third-party API usage. The sketch below illustrates one such optimization.
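
One common way to avoid unnecessary calls is to cache results keyed on the input, so the same slow lookup is never repeated. The sketch below is a generic TypeScript illustration; enrichCompany stands in for any rate-limited third-party call and is not a Cargo or vendor function.

```typescript
// Illustrative only: cache enrichment results so the workflow never calls
// a rate-limited API twice for the same input.

const cache = new Map<string, Promise<string>>();

async function enrichCompany(domain: string): Promise<string> {
  // Imagine a real, rate-limited HTTP request here.
  return `company-profile-for-${domain}`;
}

function cachedEnrich(domain: string): Promise<string> {
  // Reuse an in-flight or completed call instead of issuing a new one.
  let pending = cache.get(domain);
  if (!pending) {
    pending = enrichCompany(domain);
    cache.set(domain, pending);
  }
  return pending;
}

async function main() {
  // Two records from the same company cost only one upstream call.
  const results = await Promise.all([
    cachedEnrich("acme.com"),
    cachedEnrich("acme.com"),
  ]);
  console.log(results);
}

main();
```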


Modularising repeatable workflow components

There's a trade-off involved in choosing between building large, monolithic workflows and creating systems of smaller, interconnected workflows.

A single, comprehensive workflow offers straightforward control but can become rigid and prone to bottlenecks. On the other hand, a modular approach, with separate but connected workflows, offers greater flexibility and can better adapt to various requirements.

For instance, if most of your workflows involve similar lead assignment logic at the end, it might be advisable to break that logic out into a dedicated modular workflow that handles assignment for all of them and is triggered by the other workflows. This follows the same logic as creating 'utils' inside a development codebase.

Similarly, if most of your workflows involve a similar lead enrichment logic in the middle, you could create a Cargo 'tool' that you can pull into each of those workflows without having to rewrite the logic.
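
The 'utils' analogy translates directly into code. In the hypothetical TypeScript sketch below (the routing rule and team names are invented), the assignment logic lives in exactly one function, and each workflow calls it rather than re-implementing the rules.

```typescript
// Illustrative only: shared assignment logic lives in exactly one place.

interface Lead {
  email: string;
  country: string;
}

// Shared, "utils"-style logic: the single source of truth for routing.
function assignOwner(lead: Lead): string {
  return lead.country === "FR" ? "emea-team" : "us-team";
}

// Two different workflows reuse the same logic instead of duplicating it.
const fromInboundWorkflow = assignOwner({ email: "a@b.fr", country: "FR" });
const fromOutboundWorkflow = assignOwner({ email: "c@d.us", country: "US" });
console.log(fromInboundWorkflow, fromOutboundWorkflow);
```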

By understanding these principles and applying them thoughtfully, you can create efficient and reliable workflows in Cargo, tailored to your specific needs and constraints.