Write data back to a model with the add record node
Cargo data models are not just a read-only source of data and triggers; they are a persistence layer that can serve as a single source of truth for leads, deals, and other objects important from a business perspective.
While most data models are kept in sync with external systems by their respective connectors, custom and HTTP models also permit writing data directly from a workflow.
Before you begin
Ensure you have configured an HTTP connector and a data model.
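This guide assumes a model with the following columns (a hypothetical schema matching the expression used later in this guide; adapt it to your own objects):
- _id, _title, _emitted_at: system columns identifying the record
- email, email_domain, job_title: lead attributes passed through from the workflow's start node
- is_business_email, ai_decision_maker_score: additional columns set in the expression (hardcoded in this example)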
First, while in the workflow editor, add a new node by creating a link from the start node.
This will open the nodes catalog; search for the model record node.
In the new node, select the relevant data model. As noted above, only some data models will be available, for instance HTTP connector models. Then select the insert action.
The last part of the configuration is to specify the data object that will populate the record columns.
We need to fill in the data field. It needs to return a JSON object, which can be achieved by writing the following expression:
{{
  JSON.parse(`{
    "_id": "${nodes.start._id}",
    "_title": "${nodes.start.email}",
    "_emitted_at": "${nodes.start._emitted_at}",
    "email": "${nodes.start.email}",
    "email_domain": "${nodes.start.email_domain}",
    "job_title": "${nodes.start.job_title}",
    "is_business_email": "TRUE",
    "ai_decision_maker_score": "FAILED"
  }`)
}}
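Note that JSON.parse over a template literal will fail if any interpolated value contains characters that break JSON, such as double quotes or newlines. If the data field accepts a plain JavaScript object expression (an assumption worth verifying in your workspace), a more robust sketch would build the object directly instead of parsing a string:
{{
  // Sketch: construct the object directly, avoiding string escaping issues.
  ({
    _id: nodes.start._id,
    _title: nodes.start.email,
    _emitted_at: nodes.start._emitted_at,
    email: nodes.start.email,
    email_domain: nodes.start.email_domain,
    job_title: nodes.start.job_title,
    is_business_email: "TRUE",
    ai_decision_maker_score: "FAILED"
  })
}}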
Now we can head to the data module and verify that our new record was saved as expected. Click data in the left sidebar and find the relevant data model. Click on the model and, in the table that opens, search for the latest record.
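Assuming the start node emitted a lead like the one below (all values illustrative), the saved record would look roughly like this:
{
  "_id": "42",
  "_title": "jane@acme.com",
  "_emitted_at": "2024-05-01T12:00:00Z",
  "email": "jane@acme.com",
  "email_domain": "acme.com",
  "job_title": "Head of Operations",
  "is_business_email": "TRUE",
  "ai_decision_maker_score": "FAILED"
}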
That's what our new data model record looks like. If we have other workflows triggering off that data model, they should already run with this new entry.
Finish line
You've learned how to write data back to models using the add record node. This enables you to:
- Create new records programmatically within workflows
- Maintain a single source of truth for your business data
- Trigger downstream workflows based on the newly created records