Commit 01928a1

Update default data destination to S3

1 parent cefc4c3 commit 01928a1

10 files changed: +126 -64 lines changed

docs/dev-guide/can-over-someip-demo.md

Lines changed: 28 additions & 8 deletions
@@ -7,8 +7,8 @@ available to nodes on the Ethernet network.
 
 This demo generates CAN data on a virtual CAN bus, and bridges this data onto SOME/IP. The Reference
 Implementation for AWS IoT FleetWise (FWE) is then provisioned and run to collect the CAN data from
-SOME/IP and upload it to the cloud. The data is then downloaded from Amazon Timestream and plotted
-in an HTML graph format.
+SOME/IP and upload it to the cloud. The data is then downloaded from the specified data destination
+and plotted in an HTML graph format.
 
 The following diagram illustrates the dataflow and artifacts consumed and produced by this demo:
 
@@ -227,8 +227,8 @@ collect data from it.
 The demo script:
 
 1. Registers your AWS account with AWS IoT FleetWise, if not already registered.
-1. Creates an Amazon Timestream database and table.
-1. Creates IAM role and policy required for the service to write data to Amazon Timestream.
+1. Creates an S3 bucket with a bucket policy that allows AWS IoT FleetWise to write data to the
+   bucket.
 1. Creates a signal catalog based on `can-nodes.json`.
 1. Creates a model manifest that references the signal catalog with all of the CAN signals.
 1. Activates the model manifest.
@@ -242,12 +242,29 @@ collect data from it.
 1. Creates a campaign from `campaign-brake-event.json` that contains a condition-based collection
    scheme to capture the engine torque and the brake pressure when the brake pressure is above
    7000, and targets the campaign at the fleet.
+1. The data uploaded to S3 is in JSON format by default, or in Parquet format if the
+   `--s3-format PARQUET` option is passed.
 1. Approves the campaign.
 1. Waits until the campaign status is `HEALTHY`, which means the campaign has been deployed to
    the fleet.
-1. Waits 30 seconds and then downloads the collected data from Amazon Timestream.
+1. Waits 20 minutes for the data to propagate to S3 and then downloads it.
 1. Saves the data to an HTML file.
 
+If `TIMESTREAM` upload is enabled (**Note**: Amazon Timestream for Live Analytics is only
+available to customers who have already been onboarded in that region. See
+[the availability change documentation](https://docs.aws.amazon.com/timestream/latest/developerguide/AmazonTimestreamForLiveAnalytics-availability-change.html)),
+the demo script will instead:
+
+1. Creates an Amazon Timestream database and table.
+1. Creates an IAM role and policy required for the service to write data to Amazon Timestream.
+1. Creates a campaign from `campaign-brake-event.json` that contains a condition-based collection
+   scheme to capture the engine torque and the brake pressure when the brake pressure is above
+   7000, and targets the campaign at the fleet.
+1. Waits 30 seconds and then downloads the collected data from Amazon Timestream.
+1. Saves the data to an HTML file.
+
+This script will not delete Amazon Timestream or S3 resources.
+
 1. When the script completes, a path to an HTML file is given. _On your local machine_, use `scp` to
    download it, then open it in your web browser:
 
@@ -259,15 +276,18 @@ collect data from it.
    simulated brake pressure signal. As you can see that when hard braking events occur (value above
    7000), collection is triggered and the engine torque signal data is collected.
 
-   Alternatively, if your AWS account is enrolled with Amazon QuickSight or Amazon Managed Grafana,
-   you may use them to browse the data from Amazon Timestream directly.
+   Alternatively, if your upload destination was set to `TIMESTREAM` and your AWS account is
+   enrolled with Amazon QuickSight or Amazon Managed Grafana, you may use them to browse the data
+   from Amazon Timestream directly. **Note**: Amazon Timestream for Live Analytics is only
+   available to customers who have already been onboarded in that region. See
+   [the availability change documentation](https://docs.aws.amazon.com/timestream/latest/developerguide/AmazonTimestreamForLiveAnalytics-availability-change.html).
 
    ![](./images/collected_data_plot.png)
 
 ## Clean up
 
 1. Run the following _on the development machine_ to clean up resources created by the
-   `provision.sh` and `demo.sh` scripts. **Note:** The Amazon Timestream resources are not deleted.
+   `provision.sh` and `demo.sh` scripts. **Note:** The S3 resources are not deleted.
 
    ```bash
    cd ~/aws-iot-fleetwise-edge/tools/cloud \
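
To make the new default concrete, here is a minimal sketch of the `demo.sh` invocations implied by this change. Only the flags named in the diff (`--campaign-file`, `--s3-format`, `--data-destination`) are taken from the docs; any further required arguments are omitted here and depend on your setup.

```bash
cd ~/aws-iot-fleetwise-edge/tools/cloud

# Default behavior after this commit: data destination is S3, upload format is JSON.
./demo.sh --campaign-file campaign-brake-event.json

# Same destination, but upload in Parquet format instead of JSON:
./demo.sh --campaign-file campaign-brake-event.json --s3-format PARQUET

# Opt back in to Amazon Timestream (accounts already onboarded in the region only):
./demo.sh --campaign-file campaign-brake-event.json --data-destination TIMESTREAM
```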

docs/dev-guide/edge-agent-dev-guide-nxp-s32g.md

Lines changed: 1 addition & 1 deletion

@@ -175,7 +175,7 @@ mkdir -p ~/aws-iot-fleetwise-deploy \
 ## Clean up
 
 1. Run the following _on the development machine_ to clean up resources created by the
-   `provision.sh` and `demo.sh` scripts. **Note:** The Amazon Timestream resources are not deleted.
+   `provision.sh` and `demo.sh` scripts. **Note:** The S3 resources are not deleted.
 
   ```bash
   cd ~/aws-iot-fleetwise-edge/tools/cloud \

docs/dev-guide/edge-agent-dev-guide-renesas-rcar-s4.md

Lines changed: 1 addition & 1 deletion

@@ -191,7 +191,7 @@ mkdir -p ~/aws-iot-fleetwise-deploy \
 ## Clean up
 
 1. Run the following _on the development machine_ to clean up resources created by the
-   `provision.sh` and `demo.sh` scripts. **Note:** The Amazon Timestream resources are not deleted.
+   `provision.sh` and `demo.sh` scripts. **Note:** The S3 resources are not deleted.
 
   ```bash
  cd ~/aws-iot-fleetwise-edge/tools/cloud \
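
Since the clean-up steps above intentionally leave the S3 resources in place, here is a hedged sketch of removing them by hand once you are finished; `<BUCKET_NAME>` is a placeholder for the bucket the demo script created, which the diff does not name.

```bash
# Deletes the demo bucket and every object in it -- irreversible, so
# double-check the bucket name before running.
aws s3 rb s3://<BUCKET_NAME> --force
```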

docs/dev-guide/edge-agent-dev-guide.md

Lines changed: 53 additions & 30 deletions

@@ -168,9 +168,12 @@ collect data from it.
      --campaign-file campaign-brake-event.json
    ```
 
-   - (Optional) To enable S3 upload, append the option `--data-destination S3`. By default the
-     upload format will be JSON. You can change this to Parquet format for S3 by passing
-     `--s3-format PARQUET`.
+   - The default `--data-destination` is S3, and the default upload format is JSON. You can change
+     this to Parquet format for S3 by passing `--s3-format PARQUET`.
+   - (Optional) To enable Amazon Timestream as the destination, add the flag
+     `--data-destination TIMESTREAM`. **Note**: Amazon Timestream for Live Analytics is only
+     available to customers who are already onboarded in that region. See
+     [the availability change documentation](https://docs.aws.amazon.com/timestream/latest/developerguide/AmazonTimestreamForLiveAnalytics-availability-change.html).
    - (Optional) To enable IoT topic as destination, add the flag `--data-destination IOT_TOPIC`. To
      define the custom IoT topic use the flag `--iot-topic <TOPIC_NAME>`. Note: The IoT topic data
      destination is a "gated" feature of AWS IoT FleetWise for which you will need to request
@@ -188,8 +191,8 @@ collect data from it.
 The demo script:
 
 1. Registers your AWS account with AWS IoT FleetWise, if not already registered.
-1. Creates an Amazon Timestream database and table.
-1. Creates IAM role and policy required for the service to write data to Amazon Timestream.
+1. Creates an S3 bucket with a bucket policy that allows AWS IoT FleetWise to write data to the
+   bucket.
 1. Creates a signal catalog based on `can-nodes.json`.
 1. Creates a model manifest that references the signal catalog with all of the CAN signals.
 1. Activates the model manifest.
@@ -203,19 +206,25 @@ collect data from it.
 1. Creates a campaign from `campaign-brake-event.json` that contains a condition-based collection
    scheme to capture the engine torque and the brake pressure when the brake pressure is above
    7000, and targets the campaign at the fleet.
+1. The data uploaded to S3 is in JSON format by default, or in Parquet format if the
+   `--s3-format PARQUET` option is passed.
 1. Approves the campaign.
 1. Waits until the campaign status is `HEALTHY`, which means the campaign has been deployed to
    the fleet.
-1. Waits 30 seconds and then downloads the collected data from Amazon Timestream.
+1. Waits 20 minutes for the data to propagate to S3 and then downloads it.
 1. Saves the data to an HTML file.
 
-If S3 upload is enabled, the demo script will instead:
+If `TIMESTREAM` upload is enabled (**Note**: Amazon Timestream for Live Analytics is only
+available to customers who have already been onboarded in that region. See
+[the availability change documentation](https://docs.aws.amazon.com/timestream/latest/developerguide/AmazonTimestreamForLiveAnalytics-availability-change.html)),
+the demo script will instead:
 
-1. Create an S3 bucket with a bucket policy that allows AWS IoT FleetWise to write data to the
-   bucket.
-1. Creates a campaign from `campaign-brake-event.json` to upload the data to S3 in JSON format,
-   or Parquet format if the `--s3-format PARQUET` option is passed.
-1. Wait 20 minutes for the data to propagate to S3 and then download it.
+1. Creates an Amazon Timestream database and table.
+1. Creates an IAM role and policy required for the service to write data to Amazon Timestream.
+1. Creates a campaign from `campaign-brake-event.json` that contains a condition-based collection
+   scheme to capture the engine torque and the brake pressure when the brake pressure is above
+   7000, and targets the campaign at the fleet.
+1. Waits 30 seconds and then downloads the collected data from Amazon Timestream.
 1. Save the data to an HTML file.
 
 This script will not delete Amazon Timestream or S3 resources.
@@ -229,8 +238,11 @@ collect data from it.
    simulated brake pressure signal. As you can see that when hard braking events occur (value above
    7000), collection is triggered and the engine torque signal data is collected.
 
-   Alternatively, if your AWS account is enrolled with Amazon QuickSight or Amazon Managed Grafana,
-   you may use them to browse the data from Amazon Timestream directly.
+   Alternatively, if your upload destination was set to `TIMESTREAM` and your AWS account is
+   enrolled with Amazon QuickSight or Amazon Managed Grafana, you may use them to browse the data
+   from Amazon Timestream directly. **Note**: Amazon Timestream for Live Analytics is only
+   available to customers who have already been onboarded in that region. See
+   [the availability change documentation](https://docs.aws.amazon.com/timestream/latest/developerguide/AmazonTimestreamForLiveAnalytics-availability-change.html).
 
    ![](./images/collected_data_plot.png)
 
@@ -452,9 +464,12 @@ collect data from it.
      --campaign-file campaign-brake-event.json
    ```
 
-   - (Optional) To enable S3 upload, append the option `--data-destination S3`. By default the
-     upload format will be JSON. You can change this to Parquet format by passing
-     `--s3-format PARQUET`.
+   - The default `--data-destination` is S3, and the default upload format is JSON. You can change
+     this to Parquet format for S3 by passing `--s3-format PARQUET`.
+   - (Optional) To enable Amazon Timestream as the destination, add the flag
+     `--data-destination TIMESTREAM`. **Note**: Amazon Timestream for Live Analytics is only
+     available to customers who are already onboarded in that region. See
+     [the availability change documentation](https://docs.aws.amazon.com/timestream/latest/developerguide/AmazonTimestreamForLiveAnalytics-availability-change.html).
   - (Optional) To enable IoT topic as destination, add the flag `--data-destination IOT_TOPIC`. To
     define the custom IoT topic use the flag `--iot-topic <TOPIC_NAME>`. Note: The IoT topic data
     destination is a "gated" feature of AWS IoT FleetWise for which you will need to request
@@ -468,8 +483,8 @@ collect data from it.
 The demo script:
 
 1. Registers your AWS account with AWS IoT FleetWise, if not already registered.
-1. Creates an Amazon Timestream database and table.
-1. Creates IAM role and policy required for the service to write data to Amazon Timestream.
+1. Creates an S3 bucket with a bucket policy that allows AWS IoT FleetWise to write data to the
+   bucket.
 1. Creates a signal catalog based on `can-nodes.json`.
 1. Creates a model manifest that references the signal catalog with all of the CAN signals.
 1. Activates the model manifest.
@@ -483,22 +498,29 @@ collect data from it.
 1. Creates a campaign from `campaign-brake-event.json` that contains a condition-based collection
    scheme to capture the engine torque and the brake pressure when the brake pressure is above
    7000, and targets the campaign at the fleet.
+1. The data uploaded to S3 is in JSON format by default, or in Parquet format if the
+   `--s3-format PARQUET` option is passed.
 1. Approves the campaign.
 1. Waits until the campaign status is `HEALTHY`, which means the campaign has been deployed to
    the fleet.
-1. Waits 30 seconds and then downloads the collected data from Amazon Timestream.
+1. Waits 20 minutes for the data to propagate to S3 and then downloads it.
 1. Saves the data to an HTML file.
 
-If S3 upload is enabled, the demo script will additionally:
+If `TIMESTREAM` upload is enabled, the demo script will instead:
 
-1. Create an S3 bucket with a bucket policy that allows AWS IoT FleetWise to write data to the
-   bucket.
-1. Creates an additional campaign from `campaign-brake-event.json` to upload the data to S3 in
-   JSON format, or Parquet format if the `--s3-format PARQUET` option is passed.
-1. Wait 20 minutes for the data to propagate to S3 and then download it.
-1. Save the data to an HTML file.
+**Note**: Amazon Timestream for Live Analytics is only available to customers who have already
+been onboarded in that region. See
+[the availability change documentation](https://docs.aws.amazon.com/timestream/latest/developerguide/AmazonTimestreamForLiveAnalytics-availability-change.html).
 
-This script will not delete Amazon Timestream or S3 resources.
+1. Creates an Amazon Timestream database and table.
+1. Creates an IAM role and policy required for the service to write data to Amazon Timestream.
+1. Creates a campaign from `campaign-brake-event.json` that contains a condition-based collection
+   scheme to capture the engine torque and the brake pressure when the brake pressure is above
+   7000, and targets the campaign at the fleet.
+1. Waits 30 seconds and then downloads the collected data from Amazon Timestream.
+1. Saves the data to an HTML file.
+
+This script will not delete Amazon Timestream or S3 resources.
 
 1. When the script completes, a path to an HTML file is given. _On your local machine_, use `scp` to
    download it, then open it in your web browser:
@@ -511,8 +531,11 @@ collect data from it.
    simulated brake pressure signal. As you can see that when hard braking events occur (value above
    7000), collection is triggered and the engine torque signal data is collected.
 
-   Alternatively, if your AWS account is enrolled with Amazon QuickSight or Amazon Managed Grafana,
-   you may use them to browse the data from Amazon Timestream directly.
+   Alternatively, if your upload destination was set to `TIMESTREAM` and your AWS account is
+   enrolled with Amazon QuickSight or Amazon Managed Grafana, you may use them to browse the data
+   from Amazon Timestream directly. **Note**: Amazon Timestream for Live Analytics is only
+   available to customers who have already been onboarded in that region. See
+   [the availability change documentation](https://docs.aws.amazon.com/timestream/latest/developerguide/AmazonTimestreamForLiveAnalytics-availability-change.html).
 
    ![](./images/collected_data_plot.png)
 
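To check what actually landed in S3 after the campaign runs, a sketch along these lines should work; `<BUCKET_NAME>` and `<OBJECT_KEY>` are placeholders, since the diff does not name the bucket the demo script creates.

```bash
# List the collected objects once the ~20 minute propagation delay has passed.
aws s3 ls s3://<BUCKET_NAME>/ --recursive

# Fetch one object for local inspection; it is JSON by default, or Parquet
# if the campaign was created with --s3-format PARQUET.
aws s3 cp s3://<BUCKET_NAME>/<OBJECT_KEY> .
```
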
docs/dev-guide/store-and-forward-dev-guide.md

Lines changed: 15 additions & 10 deletions

@@ -116,16 +116,19 @@ previous section.
 The demo script will save a set of variables about the run to a demo.env file, which can be used
 by other scripts such as request-forward.sh or cleanup.sh.
 
-When the script completes, you can view the data forwarded by the second and third campaigns in
-Timestream, as well as the demo output file(s). A path to an HTML file is given. _On your local
-machine_, use `scp` to download it, then open it in your web browser:
+When the script completes, you can view the data forwarded by the second and third campaigns in
+S3, as well as the demo output file(s). A path to an HTML file is given. _On your local machine_,
+use `scp` to download it, then open it in your web browser:
 
 ```bash
 scp -i <PATH_TO_PEM> ubuntu@<EC2_IP_ADDRESS>:<PATH_TO_HTML_FILE> .
 ```
 
-Alternatively, if your AWS account is enrolled with Amazon QuickSight or Amazon Managed Grafana,
-you may use them to browse the data from Amazon Timestream directly.
+Alternatively, if your upload destination was set to `TIMESTREAM` and your AWS account is enrolled
+with Amazon QuickSight or Amazon Managed Grafana, you may use them to browse the data from Amazon
+Timestream directly. **Note**: Amazon Timestream for Live Analytics is only available to
+customers who have already been onboarded in that region. See
+[the availability change documentation](https://docs.aws.amazon.com/timestream/latest/developerguide/AmazonTimestreamForLiveAnalytics-availability-change.html).
 
 As you explore the forwarded data, you can see the brake data from campaign 2 as well as the
 network type signal from campaign 3.
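
The guide does not show demo.env's contents, but if it is a plain KEY=value file as the wording suggests, downstream scripts can pick up the run's settings like this (a sketch; the sourcing step is an assumption, and request-forward.sh arguments are documented below):

```bash
# Pull in the variables saved by demo.sh for this run...
source demo.env

# ...so that follow-up scripts such as request-forward.sh or cleanup.sh
# operate on the same fleet, campaigns, and bucket.
./request-forward.sh
```
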
@@ -163,16 +166,18 @@ end time will be passed through to the IoT Job document by the script. The scrip
 argument must be an ISO 8601 UTC formatted time string. e.g. `--end-time 2024-05-25T01:21:23Z`
 
 When the request-forward.sh script completes, you can view the data forwarded by the first campaign
-which was requested through IoT Jobs in Timestream, as well as the demo output file(s). A path to an
-HTML file is given. _On your local machine_, use `scp` to download it, then open it in your web
-browser:
+which was requested through IoT Jobs in S3, as well as the demo output file(s). A path to an HTML
+file is given. _On your local machine_, use `scp` to download it, then open it in your web browser:
 
 ```bash
 scp -i <PATH_TO_PEM> ubuntu@<EC2_IP_ADDRESS>:<PATH_TO_HTML_FILE> .
 ```
 
-Alternatively, if your AWS account is enrolled with Amazon QuickSight or Amazon Managed Grafana, you
-may use them to browse the data from Amazon Timestream directly.
+Alternatively, if your upload destination was set to `TIMESTREAM` and your AWS account is enrolled
+with Amazon QuickSight or Amazon Managed Grafana, you may use them to browse the data from Amazon
+Timestream directly. **Note**: Amazon Timestream for Live Analytics is only available to customers
+who have already been onboarded in that region. See
+[the availability change documentation](https://docs.aws.amazon.com/timestream/latest/developerguide/AmazonTimestreamForLiveAnalytics-availability-change.html).
 
 As you explore the forwarded data, you can now see the engine data from campaign 1, in addition to
 the data from the other campaigns.
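
For example, a one-off forward request with a fixed cutoff might look like the following sketch; `--end-time` is the flag documented above, and any other required arguments are assumptions left out here.

```bash
# Forward campaign 1 data collected up to the given ISO 8601 UTC end time,
# then inspect the result in S3 once it has propagated.
./request-forward.sh --end-time 2024-05-25T01:21:23Z
```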
