Datacake is a user-friendly IoT platform that empowers you to easily create and manage custom IoT applications without the need for coding skills. It seamlessly integrates with different types of IoT devices.
With over 18,000 users and 1,000+ successful projects across 50+ countries, Datacake offers templates for popular LoRaWAN devices and supports major network servers.
Whether you're monitoring water levels, enhancing building management, or automating IoT device management, Datacake provides the tools and flexibility to streamline your operations. You can also access a library of 250+ IoT dashboard templates, stay informed with insightful blog updates, and explore its services through a live demo.
Datacake also offers several ways of connecting it to other platforms. You can integrate Datacake into other systems using MQTT, Webhooks, REST API, and GraphQL.
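For instance, you can pull device data out of Datacake with a short script. Below is a minimal sketch in Python against Datacake's GraphQL endpoint; the query shape, workspace ID, and token header are assumptions you should check against Datacake's API docs:

```python
import requests

# Hypothetical query: list the devices in a workspace.
# Field names are assumptions; verify them in Datacake's GraphQL schema.
query = """
{
  allDevices(inWorkspace: "YOUR-WORKSPACE-ID") {
    id
    verboseName
  }
}
"""

resp = requests.post(
    "https://api.datacake.co/graphql/",
    json={"query": query},
    headers={"Authorization": "Token YOUR-DATACAKE-API-TOKEN"},
)
print(resp.json())
```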
But, if Datacake is a full IoT platform, why would you connect it to another?
Well, there could be many reasons:
- You may want to store the data in your own database, because you need redundancy, backup, legal compliance, etc.
- Maybe you need some visualization or analytics not included in Datacake.
- You want to build ML models using your data.
- You need the IoT system to interact with your business apps.
In this tutorial, we will use Datacake’s data to build dashboards in Grafana Cloud.
We will build a data pipeline from Datacake to Grafana using several tools and services.
In the following picture, you can see the architecture of the system.
Let’s see a brief description of each part of the system.
What is InfluxDB Cloud

InfluxDB Cloud is a powerful and user-friendly cloud-native platform designed to help you store, analyze, and visualize time-series data. With InfluxDB Cloud, you can easily collect data from various sources, like sensors, applications, or infrastructure, and then store it securely in a scalable and highly available database.
What is Grafana

Grafana is an observability platform that allows you to query, visualize, and alert on data from various sources. It doesn't require data ingestion into a specific backend store and can unify existing data from different sources.
In this case, we will use an InfluxDB Cloud database as the data source.
What is Pipedream

Pipedream is a platform that allows you to automate processes by connecting APIs. You'll have code-level control for creating and running workflows. Pipedream includes a serverless runtime, workflow service, and source-available triggers and actions for integrated apps. You can easily set up one-click OAuth and key-based authentication for over 1,000 APIs.
Building our integration

In this tutorial, we assume you already have the following:
- An account in Datacake and some devices connected to a Workspace.
- An account in Pipedream.
- An account in InfluxDB Cloud.
- An account in Grafana Cloud.
Let’s start by creating a new project in Pipedream’s console. Just name it and click on Create Project.
After you have your project, create a new Workflow inside it. See Figure 3. Give a name to your workflow and leave the rest of the options at their defaults.
Inside the workflow, create a new trigger. In this case, we will choose HTTP/Webhook. See Figure 4.
After you select the HTTP/Webhook trigger, you will see the box shown in Figure 5. Click “Save and continue” using the default options.
Once you have created the webhook, Pipedream will show you a unique URL. You have to use this URL to send data from Datacake.
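If you want to verify the endpoint before connecting Datacake, you can send it a test request yourself. A minimal sketch using Python's requests library; the URL is a placeholder for the one Pipedream generated for you, and the payload mimics the shape used later in this tutorial:

```python
import requests

# Hypothetical test payload shaped like the Datacake webhook body used below
test_payload = {"data": {"device_name": "test-device",
                         "device_serial": "TEST123",
                         "result": [{"field": "TEMPERATURE", "value": 21.5}]}}

resp = requests.post("https://YOUR-ENDPOINT.m.pipedream.net", json=test_payload)
print(resp.status_code)  # the event should also appear in the trigger's log
```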
Configuring Datacake

So, let’s go to Datacake to configure a new Webhook integration. Go to Integrations > Webhooks and create a new one.
In the URL field of the new webhook, enter the URL provided by Pipedream, and select the “Decoder Output” option. See Figure 7.
The webhook will send all the decoded information provided by the IoT device.
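The exact payload depends on your device and its decoder, but based on what the Python step later in this tutorial expects, the body will look roughly like this (device and field names are hypothetical):

```json
{
  "data": {
    "device_name": "my-sensor",
    "device_serial": "ABC123456",
    "result": [
      { "field": "TEMPERATURE", "value": 21.5 },
      { "field": "HUMIDITY", "value": 48.0 }
    ]
  }
}
```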
Now that you have a webhook in Datacake, you should start receiving data in Pipedream. You can check this by looking at the HTTP/Webhook trigger you created there. See Figure 8.
The next step consists of writing the received data into an InfluxDB bucket. But first, we have to prepare our InfluxDB database.
Configuring a new bucket in InfluxDB Cloud

So, let’s go to InfluxDB Cloud and create a new bucket.
In InfluxDB Cloud, go to Load Data > Buckets. There you can see all the buckets that are already available to use.
Click on the button “CREATE BUCKET”, enter a name, and select the retention period you want for your data. Then click on “CREATE” and you are done. You now have a new bucket, ready to use with your Datacake data.
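If you prefer to script this step, the influxdb-client Python library can create the bucket as well. A sketch, assuming the same placeholder credentials used later in this tutorial and a 30-day retention period; note that it requires an API token that is allowed to manage buckets:

```python
import influxdb_client

client = influxdb_client.InfluxDBClient(url="YOUR-INFLUXDB-URL",
                                        token="YOUR-TOKEN",
                                        org="YOUR-ORG-ID")

# Keep data for 30 days; omit retention_rules for infinite retention
retention = influxdb_client.BucketRetentionRules(type="expire",
                                                 every_seconds=30 * 24 * 3600)
client.buckets_api().create_bucket(bucket_name="datacake",
                                   retention_rules=retention,
                                   org="YOUR-ORG-ID")
```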
Now that you have the bucket, you have to create a new API token to grant access to it. Go to API Tokens, click on “GENERATE API TOKEN”, and then on “Custom API Token”, as you can see in Figure 11.
This will lead you to the following screen. Here you have to select the read and write permissions for the bucket. See Figure 12.
After completing the configuration, click on “GENERATE”. Then you will see the generated token, as shown in Figure 13. Copy this token now, as it will not be shown again.
One last step in InfluxDB Cloud. Go to Settings in your organization and copy the values of “Cluster URL” and “Organization ID”. You will need these to connect to InfluxDB. See Figure 14.
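Before heading back, you can quickly confirm that these values work together. A short sketch using the influxdb-client library's ping() health check:

```python
import influxdb_client

# Replace the placeholders with your Cluster URL, token, and Organization ID
client = influxdb_client.InfluxDBClient(url="YOUR-INFLUXDB-URL",
                                        token="YOUR-TOKEN",
                                        org="YOUR-ORG-ID")
print("InfluxDB reachable:", client.ping())  # True if the instance answers
```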
Now that you have configured InfluxDB and have all the necessary information, let’s go back to Pipedream.
Connecting Datacake and Pipedream

Below the HTTP/Webhook trigger, add a new step. In this case, we will add a Python step, which lets you run any Python script. See Figure 15.
We will use Python to get data from the previous step and write it to the InfluxDB database.
You can get the Python code from this link: https://github.com/ingrjhernandez/python-scripts-iot/blob/main/pipedream-webhook-influxdb.py
```python
from pipedream.script_helpers import steps
import time

import influxdb_client
from influxdb_client.client.write_api import SYNCHRONOUS

# InfluxDB Cloud credentials: replace with your token, Organization ID,
# and Cluster URL
token = "YOUR-TOKEN"
org = "YOUR-ORG-ID"
url = "YOUR-INFLUXDB-URL"

client = influxdb_client.InfluxDBClient(
    url=url,
    token=token,
    org=org
)

# The Datacake webhook payload is available in the trigger step's event body
datacakedata = steps["trigger"]["event"]["body"]
sensordata = datacakedata["data"]
print(sensordata)

write_api = client.write_api(write_options=SYNCHRONOUS)
database = "YOUR-BUCKET"

# Use the device name as the measurement name
measurement = sensordata["device_name"]
print(measurement)

# Write one point per decoded field, tagged with the device serial number
for item in sensordata["result"]:
    field = item["field"]
    value = item["value"]
    print(f"Field: {field}, Value: {value}")
    point = (
        influxdb_client.Point(measurement)
        .tag("serial", sensordata["device_serial"])
        .field(field, value)
    )
    write_api.write(bucket=database, org=org, record=point)
    time.sleep(1)

print("Complete. Return to the InfluxDB UI.")
```
Notice that you have to change the values of the variables token, org, url, and database.
Now you can test and deploy the workflow. You should start seeing data in your InfluxDB bucket.
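Besides the InfluxDB UI, a quick Flux query from Python is another way to confirm the writes. A sketch that prints everything written to the bucket in the last hour:

```python
import influxdb_client

client = influxdb_client.InfluxDBClient(url="YOUR-INFLUXDB-URL",
                                        token="YOUR-TOKEN",
                                        org="YOUR-ORG-ID")

# Fetch the last hour of data from the bucket and print each record
flux = 'from(bucket: "YOUR-BUCKET") |> range(start: -1h)'
for table in client.query_api().query(flux, org="YOUR-ORG-ID"):
    for record in table.records:
        print(record.get_measurement(), record.get_field(), record.get_value())
```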
Building dashboards in Grafana

The last part of this tutorial has to do with Grafana. We will add the InfluxDB bucket as a data source and then build a new dashboard.
In Grafana, go to Connections > Add new connection, look for InfluxDB, and click on it.
In the InfluxDB connection, click on the button “Add new data source”. This will lead you to the following screen, where you can configure the access to InfluxDB.
Give the data source a name, select Flux as the query language, and enter the URL of the InfluxDB instance.
Uncheck the Basic auth option and enter your organization ID, the token, and the bucket.
Notice that you can have several data sources pointing to the same InfluxDB instance, each querying a different bucket.
Now you can click on the “Save & test” button and check if the data source is working.
If so, you should see a notification like the one shown in Figure 20.
Now that we have a working data source, we can build our dashboard in Grafana. But before we start with it, let’s go to InfluxDB Cloud again to build our Flux queries.
In InfluxDB Cloud go to Data Explorer and select the bucket, the measurement, and the field using the graphical query builder.
Then click on “SUBMIT” to display the data in the Data Explorer graph.
Now that you checked that the query works, click on “SCRIPT EDITOR”. This will show you the Flux query in plain text. See Figure 22.
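For a measurement and field like the ones our Python step writes, the generated script will look roughly like this (bucket, measurement, and field names are placeholders):

```
from(bucket: "YOUR-BUCKET")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["_measurement"] == "my-sensor")
  |> filter(fn: (r) => r["_field"] == "TEMPERATURE")
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
  |> yield(name: "mean")
```

Conveniently, Grafana's Flux data source understands the same v.timeRangeStart, v.timeRangeStop, and v.windowPeriod variables, so the query follows the dashboard's time picker when pasted as-is.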
We will use this script in Grafana to perform the queries and show the data on the dashboard. So, copy the text of the Flux query.
Now go back to Grafana and create a new dashboard. Go to Dashboards, click on the button “New”, and select Dashboard.
Then, click on “Add visualization”. This will create a new panel where you can select the type of visualization, paste the query, and configure all the aspects of the visualization. See Figure 24.
In the Data source field, select the one we created for our InfluxDB bucket. Then, paste the Flux script into the query box. After running this query, you will start to see data in the visualization panel.
Repeat this procedure for different measurements and fields to build your dashboard.
Take into account that in the Python script we set up the bucket data as follows (illustrated in the line-protocol sketch after this list):
- Measurements: Each measurement corresponds to a device.
- Fields: Every numerical value in the payload is written as a field.
- Tags: In this case, the only tag assigned corresponds to the serial number of the device. You can define more, depending on the use case; location, model, etc., can be used as tags.
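In InfluxDB line protocol terms, each point written by the script looks something like this (hypothetical device and reading):

```
my-sensor,serial=ABC123456 TEMPERATURE=21.5
```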
In Figure 25 you can see an example of a dashboard showing Datacake data.
In this tutorial, you have learned to build dashboards in Grafana using data coming from Datacake.
To do this, we used two other platforms: Pipedream and InfluxDB.
We built a full data pipeline through Datacake-Pipedream-InfluxDB-Grafana.
Finally, we created a dashboard using live and historical Datacake data.