Architecture Question

  • Multiple industrial sites with occasional network connectivity issues
  • Multiple legacy control systems in place, all support OPC
  • Business users want to see some process tags in a cloud BI system
  • Custom tool deployed on-site that reads OPC and generates CSV logs that are then pushed into the cloud BI tool every 15 minutes
  • Complex flows in the BI ETL layer to process, clean, and store the time-series (TS) data and transform it into useful information

What I would like to achieve:

  • Robust store and forward at the site level to avoid risking data loss due to connectivity disruptions
  • A unified namespace applied to tags to provide a clean organizational structure (hierarchy, data type metadata etc.)
  • The ability to support event detection and trigger an incident management system

It seems that this could be achieved using either the Canary OPC Collector on-site, OR an MQTT collector on-site pulling data through HighByte to apply the UNS, with the data then flowing to the BI system using the ODBC connector OR MQTT.

Is there another approach that I could consider?

What are the pros and cons of these approaches?

Thanks in advance!


1 reply
    Hi Matt Markham,

    Each one of our collectors uses our store and forward technology which has been proven to be quite robust. Our Sender service is responsible for forwarding the data along to the Receiver/Historian. When that communication breaks, the Sender will cache data to disk until it can reestablish the connection.
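    To make the pattern concrete, here is a minimal, generic sketch of store-and-forward (not Canary's actual Sender implementation; class and method names are made up for illustration): transmit when the link is up, cache to disk when it isn't, and replay the backlog in order once the connection is reestablished.

    ```python
    import json
    import os

    class StoreAndForwardSender:
        """Illustrative store-and-forward: buffer samples to disk on
        failure, replay them in order once the link recovers."""

        def __init__(self, transmit, cache_path):
            self.transmit = transmit      # callable that raises ConnectionError when offline
            self.cache_path = cache_path  # newline-delimited JSON backlog file

        def send(self, sample):
            self._flush_backlog()         # preserve ordering: drain old data first
            try:
                self.transmit(sample)
            except ConnectionError:
                with open(self.cache_path, "a") as f:
                    f.write(json.dumps(sample) + "\n")

        def _flush_backlog(self):
            if not os.path.exists(self.cache_path):
                return
            with open(self.cache_path) as f:
                backlog = [json.loads(line) for line in f if line.strip()]
            sent = 0
            for sample in backlog:
                try:
                    self.transmit(sample)
                    sent += 1
                except ConnectionError:
                    break                  # still offline; stop and keep the rest
            remaining = backlog[sent:]
            if remaining:
                with open(self.cache_path, "w") as f:
                    for s in remaining:
                        f.write(json.dumps(s) + "\n")
            else:
                os.remove(self.cache_path)
    ```

    The important property for your use case is that the cache lives on disk, so a site-side power cycle during an outage does not lose the buffered samples.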

    When it comes to UNS, we typically use the metric name defined in the OPC server or coming from the publishing device. In my experience, OPC tag names aren't always defined as nicely as what comes through MQTT, but we do offer the ability to "alias" or change the tag name within our OPC collectors. Whether you're using OPC or MQTT, though, Canary has the ability to create its own structure through our Views service. So if the tag structure coming from the data source is not desired, you can create virtual views on top of the raw data to transform it into the structure you want. We make use of regular expressions to match string patterns and change the structure. It is also within these virtual views that we define what your different assets are. (See Virtual Views and Asset Models.)
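    The regex-based restructuring can be illustrated roughly like this (a generic sketch, not the Views service itself; the tag pattern, site name, and hierarchy layout are all invented for the example):

    ```python
    import re

    # Hypothetical raw OPC tag names like "PLC2_BOILER_Temp", lifted into a
    # Site/Area/Line/Signal hierarchy via named capture groups.
    PATTERN = re.compile(r"^PLC(?P<line>\d+)_(?P<area>[A-Z]+)_(?P<signal>\w+)$")
    TEMPLATE = r"SiteA/\g<area>/Line\g<line>/\g<signal>"

    def to_uns_path(raw_tag):
        """Map a raw tag name onto the UNS hierarchy; pass unmatched tags through."""
        if PATTERN.match(raw_tag):
            return PATTERN.sub(TEMPLATE, raw_tag)
        return raw_tag

    print(to_uns_path("PLC2_BOILER_Temp"))  # SiteA/BOILER/Line2/Temp
    ```

    The same idea applies whichever tool does the restructuring; the question is whether you'd rather maintain those patterns in the historian's views or in a dedicated modeling layer like HighByte.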

    As for event detection, we do have a Calcs & Events service. Events are condition-based and send out an email when their conditions have been met. We store events in a SQLite database, but the service can be configured to store them in an external SQL database instead.
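    Condition-based event detection with SQLite storage might be sketched like this (a generic illustration only; the table layout, threshold rule, and function names are assumptions, not Canary's schema). Note the edge-triggering: an event is logged when a tag first crosses the threshold, not for every sample above it.

    ```python
    import sqlite3

    def detect_events(samples, threshold, db_path=":memory:"):
        """Record an event each time a tag's value crosses above `threshold`.
        `samples` is an iterable of (tag, timestamp, value) tuples."""
        conn = sqlite3.connect(db_path)
        conn.execute(
            "CREATE TABLE IF NOT EXISTS events (tag TEXT, ts REAL, value REAL)"
        )
        above = {}  # tag -> was the previous sample above the threshold?
        for tag, ts, value in samples:
            if value > threshold and not above.get(tag, False):
                conn.execute("INSERT INTO events VALUES (?, ?, ?)", (tag, ts, value))
                # this is also where you would notify an incident-management system
            above[tag] = value > threshold
        conn.commit()
        return conn

    conn = detect_events(
        [("Temp", 0, 90.0), ("Temp", 1, 105.0), ("Temp", 2, 110.0), ("Temp", 3, 80.0)],
        threshold=100,
    )
    print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])  # 1
    ```

    For your incident-management requirement, the notification hook above is where you would call out to that system rather than (or in addition to) sending email.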

    Once data is stored in the historian, it can be extracted using our ODBC service, our Read API, or using our Publisher service. Here are some helpful links for more information on those possibilities:
