Tech Titans Have Differing Perspectives on AI’s Future – Part I of an Exploration in AI

There is a much-hyped debate between two of the world’s most notable tech entrepreneurs, centered on the risks inherent in the rapid development of Artificial Intelligence (AI). This highly publicized row involves the dueling perspectives of Mark Zuckerberg, Founder and Chief Executive Officer of Facebook, and Elon Musk, most notably co-founder and Chief Executive Officer at Tesla and Space Exploration Technologies (also known as SpaceX). Mr. Zuckerberg is optimistic and eager to advance AI as Facebook chases its goal of building a global community, while Mr. Musk is concerned and cautionary about AI’s potential threats to humanity if its growth is unchecked, its power too centralized.

Their arguments are compelling and important to understand, as is the nature of AI technology itself.

Artificial Intelligence, the Enigma

Though the term “AI” is universally known, it is not widely understood by the general public. This could be because AI technology is in its infancy, and much of what the average person knows about AI comes via media coverage. Research groups such as The Future of Life Institute (FLI) aim to improve our collective understanding of AI and its capabilities, not only for the sake of educating people, but also so we may develop AI in a manner that is beneficial and safe for humanity. In the context of Mr. Zuckerberg’s and Mr. Musk’s debate, FLI provides valuable grounding information.

Founded by tech entrepreneurs and academics in 2014, The Future of Life Institute’s mission is “To catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.” With a current focus on mitigating risks associated with nuclear weapons and biotechnology, FLI’s team consists of five founding members: Jaan Tallinn, Co-founder at Skype; Max Tegmark, Professor at MIT; Viktoriya Krakovna, Research Scientist at DeepMind; Anthony Aguirre, Professor at UC Santa Cruz; and Meia Chita-Tegmark, PhD candidate at Boston University. There is also a multidisciplinary advisory board of 13 individuals, which happens to include Mr. Musk among its ranks.

Future of Life Institute

FLI categorizes AI into two subtypes based on functionality. Most prevalent today is AI software that can perform singular tasks, such as recognizing a person’s unique facial features or driving a car. This subtype is known as narrow AI or weak AI. Humans partner with narrow AI for enhanced productivity, and they are not fully replaced by it. The future of AI lies in the evolution of artificial general intelligence (AGI) or strong AI, which should be able to perform more complex tasks and rely less on human partnership. While the definition of AGI is not universally agreed upon (after all, it is still largely a concept and not a reality), there is a general consensus that AGI will exhibit the following capabilities at the least:

  • In the face of uncertainty, demonstrate the ability to reason and make judgments
  • Utilize common sense knowledge when making decisions
  • Plan for the future, and consider that future when taking action
  • Learn and retain knowledge from experience
  • Communicate in a natural language

To perform these tasks well, some experts believe that AGI will need to develop intelligence approaching consciousness. Such an advancement is an exciting prospect for the future, and also the crux of the highly publicized debate between Mark Zuckerberg and Elon Musk.

Mark Zuckerberg: AI, the Tech Panacea

Mark Zuckerberg holds an optimistic view of the future of AI and believes AI is key to solving challenges faced by his ubiquitous company. In April of 2018, Mr. Zuckerberg testified before the United States Congress about Facebook, and several times referred to AI as a means to make the platform more secure while fostering a global community. Nearly a year prior, Mr. Zuckerberg hosted a Facebook Live event during which he touched on his positive outlook with regard to AI. “In the next five to ten years, AI is going to deliver so many improvements in the quality of our lives,” he said.

Mr. Zuckerberg’s company is pushing significant resources into AI research efforts, which are carried out by a division called Facebook Artificial Intelligence Research (FAIR). By the year 2020, a staff of roughly 180 to 200 people will grow to about 400, led by Chief AI Scientist Yann LeCun. In addition to his duties at Facebook, Mr. LeCun is a part-time Silver Professor at New York University, among other things. In fact, many members of the multi-national, inter-disciplinary team at FAIR have ties to the world of academia. Facebook has drawn criticism for its aggressive recruitment of AI researchers, specifically due to the pressure placed on universities and non-profit AI research groups.

According to Terena Bell, a freelance journalist specializing in AI who writes for online tech publication CIO, Facebook is advancing AI in the following areas:

  • Connecting our brains to the software via hardware like Google Glass or Snap Spectacles
  • Providing video content recommendations based on your and your friends’ behavior on Facebook
  • Eliminating issues with the Facebook Live feature, namely troublesome broadcast subject matter
  • Identifying pornographic content using visual recognition technology
  • Protecting Facebook’s leader, Mr. Zuckerberg, from the 14 million threats he receives annually
  • Managing visitors to Facebook’s offices, specifically allowing employees to use a chat platform to report threats

With varying degrees of success, Facebook also uses AI to identify terrorist propaganda, political spammers and fraudulent accounts, and other potential threats. However, these efforts require an understanding of linguistic and social nuance, an area that remains challenging for AI, so human intervention is still necessary. In addition to its AI development efforts, Facebook employs a sizeable group of content moderators.

Elon Musk: AI, the Necessary Evil

Elon Musk decries the dangers inherent in AGI and has been a vocal proponent of regulating the technology as it is developed. Appearing in the 2018 documentary ‘Do You Trust This Computer?’, Mr. Musk warns, “The least scary future I can think of is one where we have at least democratized AI because if one company or small group of people manages to develop godlike superintelligence, they could take over the world.” He also holds a place on the 13-member FLI advisory board, presumably to assist with building a future far better than the least scary one he describes.

When it comes to fears of AI’s potential as a threat to humanity, FLI identifies two scenarios:

  1. Humans deliberately program the AI to perform a devastating task, particularly in the context of warfare
  2. In the performance of a beneficial task, the AI develops a destructive methodology and resists intervention

Mr. Musk, characterized by Vanity Fair’s Maureen Dowd as “a leading doomsayer”, has been attempting to make the case for slowing the development of advanced AI for the sake of enacting regulatory safeguards, no doubt because he believes that we are in grave danger of the above scenarios becoming a reality. These efforts began during a speaking engagement at the Massachusetts Institute of Technology (MIT) in 2014, and have evolved over the years as the entrepreneur explores ways to mitigate the threats he perceives.

In 2016, Mr. Musk co-founded a tech start-up called Neuralink, for the purpose of developing a means for the human brain to interface with computers. Appearing on an installment of the popular podcast “The Joe Rogan Experience”, he repeated the warning “Best-case scenario, we effectively merge with AI, where AI serves as a tertiary cognition layer. It will enable anyone who wants to have superhuman cognition.” Therefore, in addition to democratizing and distributing AI to avoid concentrating the power with a small group of people, integration with the technology at the most intimate level will provide the regulation Mr. Musk demands. “I tried to convince people to slow down AI, to regulate AI, but this was futile. I tried for years. Nobody listened.”

As illustrated in the graphic below, Mr. Musk has good company in his cautionary stance, though he is by far the most famously vocal.

Vanity Fair

The Debate Rages On

While Mark Zuckerberg invests countless resources into the rapid development of AGI to replace human effort at Facebook, Elon Musk continues to advocate for keeping the balance of power tipped in favor of humans. At this point, neither tech titan shows signs of acquiescing to the other’s point of view, and it is undoubtedly worth exploring the positions of other tech titans like Bill Gates and Larry Page to further understand the AGI development landscape. One thing is clear: This debate is an invaluable opportunity for the public to educate themselves about AI and participate in the conversation about how it will shape our world’s future.

Biofabrication on the Cusp of Mass Production

A new paradigm in manufacturing is steadily growing thanks to partnerships among academics, industry leaders, and government entities. In laboratories across the globe, breakthroughs in the engineering of biomaterials are giving glimpses into the future of regenerative medicine, which is a means of treating disease and saving lives that harnesses the power of the body’s ability to heal itself and marries it to cutting-edge technology. As the healthcare and manufacturing industries combine their powers to address a mounting global health crisis, the line between science fiction and science fact is blurred for the benefit of everyone.

An Ever-Increasing Demand for Organ and Tissue Donations

It wasn’t long ago that tissue and organ transplantations were considered medical breakthroughs. According to the U.S. Department of Health and Human Services, the first transplant of skin occurred in 1869 and the first transplant of a solid organ – a kidney – took place in 1954. In the ensuing decades, the demand for replacement body parts increased dramatically in the U.S., yet the supply has not kept pace with it. In 2006, the Institute of Medicine (IOM) published a book titled “Organ Donation,” which put forth recommendations for increasing the availability of donated organs. Despite heightened public awareness and the efforts of organizations such as the Organ Procurement and Transplantation Network (OPTN), the gap between those in need and those who can provide continues to widen. It is in this space that advances in regenerative medicine and the emerging technologies that make up biofabrication are poised to save countless lives.

Organ Donation Chart

The Path Forward via Regenerative Medicine

Organ transplantation is a technique within the field of regenerative medicine, which also encompasses the application of biochemical techniques to induce tissue regeneration, as well as the use of differentiated cells (e.g. stem cells) either alone or as part of bioartificial (i.e. engineered) tissue. Ideally, a patient’s own cells are used to avoid the complications associated with immune system rejection. Use of another person’s organs, tissues, or cells requires an extensive matching process, as well as administration of immunosuppressive drugs to curb the rejection risk. Research efforts in the realm of regenerative medicine are yielding new technologies and techniques for providing patients with desperately needed tissues and organs, yet the pathway to mass production has not been straightforward. Until now.

The next step on the path to bringing regenerative medicine to the masses is through the process of biofabrication.

What is Biofabrication?

Biofabrication is a type of manufacturing – also referred to as biomanufacturing – that combines the disciplines of mechanical engineering, biomaterials science, cell and developmental biology, computer science, and materials science, to name a few. It involves the creation of complex biological products from raw materials such as living cells, biomaterials, extracellular matrices, and molecules. In the context of addressing the need for donated tissue and organs, biofabrication can create safe and effective products from a patient’s own raw materials, therefore reducing the chances of rejection by their immune system. So far, the majority of the work in biofabrication has taken place in laboratories, and output has been limited. The great challenge is scaling biofabrication to a level at which manufacturing output can meet demands while maintaining compliance with U.S. FDA regulations.

All of the technologies that are emerging as part of the Fourth Industrial Revolution, including big data analytics, autonomous robots, simulation, horizontal and vertical system integration, the Industrial Internet of Things (IIoT), cybersecurity, the cloud, additive manufacturing, and augmented reality, will be utilized and pushed to new limits in the service of scaling biofabrication. Global businesses have been investing in these technologies for the sake of advancement in their respective markets, and now is a tremendous opportunity to use these capabilities in partnership with medical research entities and government entities to save lives.

Medical Technologies

BioFabUSA Creates a Multidisciplinary National Consortium

According to Dean Kamen, a world-famous inventor and the president of DEKA Research & Development, “There have been significant breakthroughs in cell biology, biofabrication, and materials science in the last decades, which have laid the foundation for large-scale manufacturing and commercialization of engineered tissue-related technologies, including tissue and organs. Now it is time to move out of the lab and into the factory.”

To jump-start this effort, the United States Department of Defense (DoD) in 2016 awarded $80 million in federal funding to the Advanced Regenerative Manufacturing Institute (ARMI) for the establishment of an Advanced Tissue Biofabrication (ATB) Manufacturing USA Institute. This program, known as BioFabUSA, is made up of 47 industrial partners, 26 academic and academically-affiliated partners, and 14 government and non-profit partners. Its mission and purpose are to “help others with whatever they need to create the product be it knowledge, technology, equipment, process, and standards – anything needed to address the ecosystem for a new industry,” explains Kamen. His organization, FIRST, is one of BioFabUSA’s non-profit partners.

Scaling the advances in regenerative medicine to meet public health demand, and in the process growing a brand new industry, requires a multidisciplinary approach with an emphasis on scalability. The scope of BioFabUSA’s efforts will focus on five so-called thrust areas:

  • Cell selection, cell culture, and cell scale-up
  • Biomaterial selection and biomaterial scale-up
  • Tissue process automation and process monitoring
  • Tissue maturing technologies
  • Tissue preservation and tissue transport

With high ambitions, BioFabUSA plans to not only advance biofabrication to new heights but also to provide educational opportunities for rising and existing workforce talent, in the hopes of heading off the inevitable skills gap that will be created by rapid technological advancements. Due to the multidisciplinary nature of large-scale biofabrication, these opportunities will need to cover a broad spectrum of disciplines, from computer science to life science and beyond. Minding the skills gap while building the biofabrication industry is not only smart, it is imperative.

Printing Kidneys at Scale

Dean Kamen acknowledges that “it takes a force of nature to move an idea from the lab to the factory. With collaboration among the members of ARMI/BioFabUSA, we feel there will be significant breakthroughs in the next five to ten years – maybe sooner.  Imagine how healthcare would change if we could print a new kidney or liver for you when you needed one.” As it turns out, this statement is far from conjecture.

3D Organ Printing
Image Source: TechCrunch

Harnessing the power of additive manufacturing, which is also known as 3D printing, companies are beginning to see success in creating functioning biomaterial that can replace complex body parts, including kidneys. To get there, however, requires the perfection of the techniques and technologies needed to create the tiniest of biological structures, namely capillaries and specialized cells. Success requires a multifaceted approach, and mass production is the end game. A white paper published by Prellis Biologics, a small startup and alumnus of the IndieBio Accelerator program in San Francisco, CA, identifies four requirements for 3D printing functional human organs at scale:

  1. Resolution: The ability to create tissue, organs, and extracellular matrix needs to fall within the range of a single cell, which is between one and ten microns.
  2. Speed: Printing times must be compatible with the health of the cell structures being printed, which possess unique sensitivities and lifetime constraints.
  3. Complexity: Engineered tissue needs to match the complex structural components of the living tissue it is intended to replace, and be capable of providing nuanced functionality.
  4. Biocompatibility: The product must be compatible with the patient’s immune system, be structurally sound, and capable of operating within the physiological requirements of a natural organ.

Co-founder Melanie Matheu, a research scientist interviewed by TechCrunch, estimates the global tissue engineering market will grow from $23 billion in 2015 to $94 billion by 2024. Though the work at Prellis is aimed at printing kidneys, the innovations achieved through the company’s efforts are applicable throughout the field of biofabrication. The promise of lowering health care costs and saving lives worldwide is within reach.

Work with a High Purpose

Dave Vasko, director of Advanced Technology at Rockwell Automation, an industry partner in the BioFabUSA program, explains “As quickly as the possibilities are unfolding, the advances largely are still in the research mode. The good news: While the recipes are incredibly specialized, the process of regenerative medicine manufacturing looks a lot like something you’d see in the making of a craft beer – automating the process and using data and analytics to monitor and improve that process to create consistent predictable results.” Through collaborative efforts like those of BioFabUSA, as part of the greater Manufacturing USA institute network, printing kidneys in a cost-effective manner at scale will soon become a reality.

The business of saving lives may be complex, but it need not be daunting. Thanks to proactive investments by entities and individuals, a burgeoning technological landscape ripe for disruption, and the power of human good will, regenerative medicine is being given a massive platform to transform healthcare at scale and bolster the global economy with the addition of new industry. Dave Vasko said it best: “This is work with a high purpose.”

Choosing a SCADA system


This article will go over the important points of choosing a SCADA system and evaluate one of the more cost-effective solutions out there, Inductive Automation’s Ignition.

What is important for a SCADA system?

Price to Performance Ratio

The customer usually doesn’t mind paying a little more for a system that delivers more performance, has more features, or is more flexible than the competition. Also, the system as a whole has to be taken into account. For example, a SCADA software package could be relatively low in cost, but the operating system that the server has to run on may be expensive. Additionally, the database software license may tip the scale too far. Sometimes the SCADA system requires multiple servers that run in tandem, multiplying the other “auxiliary” costs across the install, and a lot of times this cost is not discussed or disclosed up-front. The user needs to be a little tech savvy to think of the right questions to ask: how many physical servers and operating system licenses are required, and how much do client licenses cost? A lot of times the licensing costs and complexity can be very confusing and frustrating for the customer.
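Those auxiliary-cost questions can be folded into a simple total-cost comparison. The sketch below uses entirely hypothetical prices – real figures would come from vendor quotes – but it shows how a package that looks cheap up front can lose once per-server and per-client costs are added:

```python
# Hypothetical cost figures -- replace with real vendor quotes.
def total_system_cost(scada_license, servers, os_license_each,
                      db_license_each, client_license_each, clients):
    """Sum the SCADA license plus the 'auxiliary' per-server and
    per-client costs that are easy to overlook up front."""
    server_costs = servers * (os_license_each + db_license_each)
    client_costs = clients * client_license_each
    return scada_license + server_costs + client_costs

# A low-cost package with expensive auxiliaries...
budget = total_system_cost(10_000, servers=2, os_license_each=1_200,
                           db_license_each=3_500, client_license_each=500,
                           clients=20)
# ...versus a pricier all-inclusive license.
premium = total_system_cost(25_000, servers=1, os_license_each=0,
                            db_license_each=0, client_license_each=0,
                            clients=20)
print(budget, premium)  # 29400 25000 -- the "cheap" option costs more
```

The point is not the specific numbers but the habit of totaling every line item before comparing vendors.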

Inductive Automation’s license model is very simple and straightforward, and it has a great price to performance ratio. There are a few levels and bolt-on modules for the system, but the customer knows what they are getting and paying for. The capabilities of the base system compare favorably with those of competitors at the same cost.

Ignition is an “unlimited client, unlimited tag, and unlimited developer” system. Inductive Automation recommends one server per facility, and if redundancy is needed, the redundant server is half price. A lot of competitors are pretty frustrated by this licensing model, and it is a big reason Ignition is being adopted for so many new projects.

Cloud Hosting

“Times are a-changing” and everything is moving into the cloud. Information technology departments want to spend more time on things like cybersecurity, network upgrades, and IP camera projects, and less time maintaining physical servers, which is understandable. Data center space and computing power are coming down in price, and corporations are beginning to see the value in investing in off-site computing resources. So what does that mean for a SCADA server? For one, it means better maintenance; for another, it means higher potential latency. Typically, the SCADA server has been tucked behind the maintenance manager’s workstation and totally forgotten about. There is another side of that coin, though: the “updates” and “new features” that come with managed hosting sometimes break system functionality.

Some SCADA servers don’t have the ability to be cloud-hosted, so if that is a must, the designer needs to check and make sure that the system is capable of it.


With cloud hosting, the user experience is only going to be as good as the internet connection or the connection to the cloud server. The user really needs to prioritize their data and think about the path of that data as it moves through the system.

PLC Communication

There are a lot of SCADA systems out there that talk to controllers; however, time needs to be taken to make sure that the SCADA server will be able to communicate with the controller. A SCADA system is pretty much useless without the ability to get data back and forth from the controller(s). Allen-Bradley controllers require an OPC-UA communication driver to communicate via Ethernet, and not all SCADA systems inherently have those drivers – building custom drivers is not always time- or cost-effective.

All controllers have to be evaluated in this manner. A facility or plant may have 10 to 50 controllers inside of it, and they may not always be the same model or even the same manufacturer. When selecting a SCADA system it is important to take all controllers (and communication media) into account.

Ignition has a strong driver set for Allen Bradley controllers. They have an OPC-UA driver for the new ControlLogix & CompactLogix family as well as legacy drivers for MicroLogix / SLC processors.

Here is a list of PLC controllers that Ignition supports right out of the box:

  • Allen-Bradley Logix
  • Allen-Bradley MicroLogix
  • Allen-Bradley PLC5
  • Allen-Bradley SLC
  • DNP3
  • Legacy Allen-Bradley CompactLogix
  • Legacy Allen-Bradley ControlLogix
  • Modbus RTU over TCP
  • Modbus TCP
  • Omron NJ Driver
  • Siemens S7-1200
  • Siemens S7-1500
  • Siemens S7-300
  • Siemens S7-400
  • Generic TCP Driver
  • UDP Driver
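When evaluating a system against a driver list like the one above, a quick script can flag which controllers in a plant lack an out-of-the-box driver. The inventory below is entirely hypothetical, but the exercise – mapping every controller and its communication medium against the vendor’s supported list – is exactly the audit recommended earlier:

```python
# Supported-driver names taken from the out-of-the-box list above.
SUPPORTED_DRIVERS = {
    "Allen-Bradley Logix", "Allen-Bradley MicroLogix", "Allen-Bradley PLC5",
    "Allen-Bradley SLC", "DNP3",
    "Legacy Allen-Bradley CompactLogix", "Legacy Allen-Bradley ControlLogix",
    "Modbus RTU over TCP", "Modbus TCP", "Omron NJ Driver",
    "Siemens S7-1200", "Siemens S7-1500", "Siemens S7-300", "Siemens S7-400",
    "Generic TCP Driver", "UDP Driver",
}

def unsupported_controllers(inventory):
    """Return the controllers whose required driver is not supported."""
    return [name for name, driver in inventory.items()
            if driver not in SUPPORTED_DRIVERS]

# Hypothetical plant inventory: controller name -> required driver.
plant = {
    "Boiler PLC":    "Allen-Bradley Logix",
    "Chiller PLC":   "Siemens S7-300",
    "Legacy Packer": "GE Series 90-30",   # no out-of-the-box driver
}
print(unsupported_controllers(plant))  # ['Legacy Packer']
```

Any controller that shows up in the gap list means a third-party OPC server or custom driver – and the cost that comes with it.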


Ignition also supports the MQTT protocol, which opens the door to a whole new wave of IoT devices. There are not a lot of SCADA systems that currently support this.

Alarms and Notifications

One of the most important aspects of a SCADA system is getting alarms and notifications to operators and maintenance personnel. Almost all SCADA applications offer this as core functionality; however, some are more sophisticated than others. For example, FactoryTalk Alarms and Events integrates well with ControlLogix and CompactLogix processors to bring the alarm to the HMI, but that is typically where it stops.

With Ignition, the designer can create what are called Rosters and Alarm Pipelines. Using these, they can create sophisticated alarm scenarios based on worker schedule, alarm severity, or any other custom attribute the user would like to add. Alarms can be configured for any analog or digital point, and when configuring analog alarms the designer can use the built-in SCADA memory tag limits.
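At its core, limit-based analog alarming comes down to comparing a live value against configured setpoints. The sketch below is product-agnostic – the limit names and severity ordering are illustrative conventions, not Ignition’s actual API:

```python
def evaluate_alarm(value, limits):
    """Return the most severe violated limit name, or None.
    Limit names (HighHigh/High/Low/LowLow) are a common convention;
    the most severe limits are checked first."""
    if "HighHigh" in limits and value >= limits["HighHigh"]:
        return "HighHigh"
    if "LowLow" in limits and value <= limits["LowLow"]:
        return "LowLow"
    if "High" in limits and value >= limits["High"]:
        return "High"
    if "Low" in limits and value <= limits["Low"]:
        return "Low"
    return None

# Hypothetical tank-level setpoints (percent of span).
tank_limits = {"LowLow": 5, "Low": 10, "High": 90, "HighHigh": 95}
print(evaluate_alarm(97, tank_limits))  # HighHigh
print(evaluate_alarm(50, tank_limits))  # None
```

A pipeline then takes the returned severity and routes it – to a roster, an escalation delay, or a notification channel – which is where a platform like Ignition adds its value over a bare comparison.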

Emailing Alarms

Ignition’s alarm pipelines support email and, with a valid email address, support reply emails. The operator can acknowledge an alarm through email if the designer sets it up that way. Other SCADA systems may have the ability to email alarm notifications, but if this is important to the customer, the designer should verify the capability before selecting a system.
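Whatever platform generates the notification, the email itself is ordinary SMTP. The sketch below composes (but does not send) an alarm message with Python’s standard library; the addresses and subject format are made up for illustration:

```python
from email.message import EmailMessage

def build_alarm_email(tag, severity, value, to_addr):
    """Compose an alarm notification message.
    Sending (smtplib.SMTP(...).send_message(msg)) is omitted here."""
    msg = EmailMessage()
    msg["From"] = "scada-alarms@example.com"   # hypothetical sender
    msg["To"] = to_addr
    msg["Subject"] = f"[{severity}] Alarm on {tag}"
    msg.set_content(
        f"Tag {tag} entered {severity} state at value {value}.\n"
        "Reply to acknowledge, if the pipeline supports reply handling."
    )
    return msg

msg = build_alarm_email("Tank1/Level", "HighHigh", 97.2, "oncall@example.com")
print(msg["Subject"])  # [HighHigh] Alarm on Tank1/Level
```

The reply-to-acknowledge feature mentioned above works by having the server poll a mailbox and match replies back to the original alarm – a detail worth confirming with the vendor before relying on it.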

Data & Reporting

SCADA systems generate a lot of data. This data is typically brought back to the operator in the form of a displayed value or trend. Some SCADA systems use standard relational databases – for example, Microsoft SQL Server, MySQL, or PostgreSQL – while others use less typical options such as Oracle or OSIsoft PI. These less typical databases make it difficult to develop third-party reports or a web reporting portal.

Inductive Automation’s Ignition SCADA system can natively use the following databases:

Figure 1: Ignition Database Support

Ignition will support additional databases, and the user can add more drivers through the management website.


Ignition does not come with report generation in the base package; however, the reporting module is pretty low cost considering the time and labor it would take to create one from scratch. Because the data is stored in a standard database format, it is also possible to create a custom reporting portal, or to have reports generated on a schedule through something like SQL Server Reporting Services. There are not a lot of SCADA systems that have built-in reporting ability, and the competitors that do offer it charge substantially more than Ignition.
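Because the history lands in a standard relational database, a custom report can be plain SQL. The sketch below uses SQLite standing in for SQL Server/MySQL/PostgreSQL, with an invented historian-style table layout (real products use their own schemas):

```python
import sqlite3

# Invented tag-history schema; real historians use their own layouts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tag_history (tag TEXT, ts TEXT, value REAL)")
conn.executemany(
    "INSERT INTO tag_history VALUES (?, ?, ?)",
    [("Flow1", "2019-01-01 00:00", 10.0),
     ("Flow1", "2019-01-01 01:00", 14.0),
     ("Flow2", "2019-01-01 00:00", 3.0)],
)

# A simple summary report: min / max / average per tag.
rows = conn.execute(
    "SELECT tag, MIN(value), MAX(value), AVG(value) "
    "FROM tag_history GROUP BY tag ORDER BY tag"
).fetchall()
for row in rows:
    print(row)  # ('Flow1', 10.0, 14.0, 12.0) then ('Flow2', 3.0, 3.0, 3.0)
```

The same GROUP BY query would run unchanged (or nearly so) against any of the standard databases Ignition supports, which is exactly why a standard format makes third-party reporting practical.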

Scripting Languages

All SCADA systems have a scripting layer. Some handle scripts much better than others. For example, GE iFix and Rockwell’s FactoryTalk View SE both use VBA, the former handling scripting better than the latter. Wonderware uses the .NET Framework (C#), and Inductive Automation’s Ignition uses Python/Java/Jython. Jython is an implementation of Python that runs on the Java virtual machine: Python syntax with access to the Java libraries.

The designer and developer need to be comfortable in the environment they choose, or they will have a hard time at least initially.

Data Driven

When a SCADA system is data-driven, really cool things can happen with the user experience. The designer can create dynamic screens and let users select what they want to see and what is important to them. Then, they can save that environment for the next time they log in. With Ignition, you can bind just about anything to either tag data or data from the database. By comparison, in FactoryTalk View SE you can only bind to tag data – and even those properties are limited.
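Saving a user’s screen selections “for the next time they log in” ultimately means persisting a small preferences record. A product-agnostic sketch, using JSON and a dict standing in for the SCADA database:

```python
import json

def save_prefs(user, prefs, store):
    """Persist per-user screen preferences as JSON.
    `store` is a dict here; a real system would write to a
    database table keyed by username instead."""
    store[user] = json.dumps(prefs)

def load_prefs(user, store, default=None):
    """Restore a user's saved preferences, or a default layout."""
    raw = store.get(user)
    return json.loads(raw) if raw else (default or {})

store = {}
save_prefs("operator1",
           {"screen": "Boilers", "trend_pens": ["Flow1", "Temp3"]},
           store)
print(load_prefs("operator1", store))
```

In a data-driven SCADA application, the loaded record would then feed the bindings that decide which screen, trends, and pens the user sees on login.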

Industry 4.0 and the Future of Jobs

In the midst of the Fourth Industrial Revolution, known colloquially as Industry 4.0, the World Economic Forum (WEF) has released a report titled The Future of Jobs Report 2018, which forecasts how global employment will be impacted by emerging technologies. Optimistic in its findings, the WEF expects a net positive for job growth as long as businesses, employees, and governments are proactive and agile in their efforts to align human talent with an evolving job market. Rather than being wholly replaced by robots, humans have an opportunity to merge their talents with available technologies for the purpose of enhancing bottom lines, but it is up to businesses and governments to create an environment conducive to sustainable growth.

The World Economic Forum

Based in Switzerland, the World Economic Forum (WEF) was founded in 1971 and is a non-political membership organization composed of the world’s top public and private companies. The WEF focuses its activities on understanding the strategic challenges of:

  1. Making sense of the Fourth Industrial Revolution
  2. Solving the problems facing the earth’s unowned natural resources
  3. Tackling global security issues

In this spirit, the WEF researched and compiled The Future of Jobs Report 2018. This report is based upon the results of a survey that asked a large and diverse set of member companies about their projections for business growth and workforce composition, as well as plans for closing skills gaps, through 2022. The results of the survey provide an understanding of the potential for the technologies at the forefront of the Fourth Industrial Revolution to create new high-quality jobs, and vastly improve the quality of work for human employees.

To fully understand the WEF’s report and the information contained therein, it is important to have a working knowledge of what the Fourth Industrial Revolution is, and what technologies are set to shape the future of how we work and do business across the globe.

The Fourth Industrial Revolution (Industry 4.0)

Modern history includes four recognized periods of rapid technological, cultural, and socioeconomic growth, referred to as Industrial Revolutions. The first came about in the year 1760 and lasted until 1840, and was defined by the emergence of mechanization, which is the use of machines to replace human or animal labor. The time gap between each subsequent Revolution has been narrower, signaling the rapidity with which our society is advancing technologically.

Key Features of the Four Modern Industrial Revolutions


Since about 2010, the world has been experiencing the Fourth Industrial Revolution, hereafter referred to as Industry 4.0 in this article. This exciting period of growth and innovation, like its three predecessors, is based upon the emergence and adoption of distinctive technologies. These are described in greater detail below.

Cyber-physical Systems Drive the Fourth Industrial Revolution

Referenced in the above graphic, Cyber-physical Systems (CPS) “integrate sensing, computation, control and networking into physical objects and infrastructure, connecting them to the Internet and to each other,” according to the National Science Foundation. These systems encompass technologies that Boston Consulting Group (BCG), a renowned  global management consulting firm, recognizes as driving Industry 4.0:

  • Big Data and Analytics: As data reporting progresses into data analytics, business decisions will be increasingly informed by valuable insights into what is possible, more so than reports about past performance.
  • Autonomous Robots: Robotic Process Automation (RPA) is designed to take up repetitive work tasks and free humans to tackle jobs that require higher-order cognitive skills.
  • Simulation: In the context of manufacturing, simulations allow for testing and optimization of machine programming in a virtual proving ground, prior to deployment into production.
  • Horizontal and Vertical System Integration: Data will be shareable between companies, across organizations and functions, and this evolution will drive a more cohesive way of doing business.
  • The Industrial Internet of Things: Devices large and small will be capable of generating and sharing insightful data, connected to companies and each other via the Industrial Internet of Things (IIoT).
  • Cybersecurity: The World Economic Forum (WEF) asserts that cyber threats are expected to negatively impact business growth, but proactive efforts to shore up security around access and communications will help manage risk.
  • The Cloud: To enable fast data sharing free of geographical barriers, more and more machine data and functionality will be stored and served from the cloud.
  • Additive Manufacturing: 3D printing technology is but one method of additive manufacturing that will allow for low-cost, quick turnaround of small batch and custom products.
  • Augmented Reality: Augmented Reality (AR) is an upskilling technology that uses a device such as smart glasses to overlay information onto a physical object, changing how a human works with that object.

As global businesses invest in the above technologies in the hopes of obtaining a competitive advantage, enhancing their bottom lines, and securing their future in a high-tech world, savvy workforce planning will be crucial. By the same token, savvy workers will need to stay ahead of the learning curve if they want to remain employed in an ever-changing job market.

A Positive Outlook for Jobs, with a Catch

In The Future of Jobs Report 2018, the WEF examines how technological and socio-economic landscapes are set to affect human workers and global businesses through the year 2022. From November 2017 to July 2018, an online questionnaire was distributed to Chief Human Resources Officers and Chief Executive Officers at over 300 companies representing 12 industry clusters and 20 economies across the globe. The instrument was designed to uncover these leaders’ plans and projections related to jobs and skills.

While the number of workers required for certain “redundant” job tasks is predicted to shrink over the next four years, the findings in the WEF report suggest that increased demand for augmented roles – those requiring human partnership with the advanced technologies of Industry 4.0, as well as those that require distinctly “human” skills – can offset that reduction. Close to 50% of companies surveyed anticipate that automation will lead to workforce reduction by 2022, yet 38% of respondents plan to extend their workforce into new productivity-enhancing roles, and more than 25% expect new roles to be created thanks to automation. This favorable forecast stands in contrast to reports published by other reputable research entities, which foretold job losses associated with the rising use of Industry 4.0 technologies. Acknowledging this difference, the WEF points out that realizing the optimistic outlook derived from survey responses depends heavily upon anticipating and addressing skills gaps.

As the global business landscape shifts to keep up with the changes inherent in Industry 4.0, and jobs are augmented, phased out, or invented to support the new ways of doing business, it will be imperative for workers, employers, and government entities to embrace a sense of urgency around reskilling and upskilling. According to WEF’s report, “no less than 54% of all employees will require significant re- and upskilling.” Reskilling is the process of undergoing training to learn new skills for the purpose of performing a different job. Upskilling provides additional skills for the purpose of performing the same or a similar job. Workers will need to be agile and open to embracing new technology and business models, proactively learning what is required of their new or augmented job. Businesses must recognize human capital investment as an asset rather than a liability, and provide training opportunities for their workforces. Government entities need to enact policy that creates an enabling environment to assist with these endeavors. The figure below provides a snapshot of the types of skills that will grow and decline as Industry 4.0 progresses.

Types of skills that will grow and decline as Industry 4.0 progresses.

There is a high risk that the demand for such growing skills will outpace the availability of qualified talent. Survey respondents reported various strategies for aligning their workforces with their business’ strategic goals, which can be distilled into three major categories:

  1. Hire new permanent staff who possess skills relevant to new technologies
  2. Completely automate the work tasks of concern
  3. Retrain existing employees

The WEF reports “the likelihood of hiring new permanent staff with relevant skills is near twice the likelihood of strategic redundancies of staff lagging behind in new skills adoption.” Nearly 25% of companies place the reskilling and upskilling responsibility squarely upon the shoulders of the employee, expecting workers to learn as their jobs change. Companies that do undertake reskilling and upskilling efforts largely focus on already highly-skilled and highly-valued employees. These findings support the assertion that global businesses are so far relying on the workforce at large to ready itself for the future. The WEF counsels that companies need to recognize the value of fostering a culture of lifelong learning. After all, if the narrowing gap between Industrial Revolutions of the past is any indication, Industry 5.0 may be upon us before we know it. Companies self-sufficient in enhancing their workforces just might find themselves at the forefront of the next exciting global thrust into our technological future.

Applications for Connected Components Workbench, and More


This tutorial will go through the applications for Connected Components Workbench (CCW), the differences between CCW and Logix 500/5000, and the differences between CCW and FactoryTalk Machine Edition.

Brief History and Speculation

Programmable controller hardware has gotten smaller, faster, and cheaper over the years. With the rise of these cheaper controllers, it is speculated that Rockwell had to keep up with the competition so as not to lose market share. Even loyal Rockwell customers have a hard time swallowing the cost difference between a CompactLogix and an Automation Direct Click PLC. And those who could bear the hardware difference would be lost at the up-front software price that Rockwell requires. Rockwell needed a leading-edge controller with a free programming software base.

Benefits of Connected Components Platform

  • Low Hardware Cost
  • Embedded I/O
  • Ability to Add I/O with Local Chassis Side Cars
  • Ethernet Connectivity
  • Safety & Motion Support
  • One programming software package for PLC, HMI, and VFD

Limitations of Connected Components Platform

  • Only small to medium applications
  • I/O Count Limitations
  • Limited instruction set and memory compared to higher level controllers
  • Not capable of remote Ethernet racks
  • Not currently capable of gigabit Ethernet
  • No current simulator or emulator

Application Notes

The main application for this platform is machine control. The platform is limited by I/O count, so the designer needs to take that into consideration when selecting a platform. Even though the Micro800 family can support more than 100 I/O points, it is the recommendation of this author that if there are more than 75 I/O points to control, higher-level controllers should be heavily considered. A designer will quickly run out of memory somewhere between 35 and 75 I/O points.
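That rule of thumb can be captured in a small selection helper. This is an illustrative sketch only – the thresholds come from the recommendation above, and the platform names are examples, not a Rockwell sizing tool:

```python
# Illustrative sketch of the I/O sizing heuristic described above.
# Thresholds come from the text; platform names are examples only.
def recommend_platform(io_points: int) -> str:
    """Suggest a controller class for a given I/O count."""
    if io_points > 75:
        return "higher-level controller (e.g. CompactLogix)"
    if io_points > 35:
        return "Micro800 family, but watch memory headroom"
    return "Micro800 family"

print(recommend_platform(20))   # Micro800 family
print(recommend_platform(50))   # Micro800 family, but watch memory headroom
print(recommend_platform(120))  # higher-level controller (e.g. CompactLogix)
```

A designer would, of course, also weigh instruction set, motion, and networking needs, not I/O count alone.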

Not so All in One

Through the process of completing this tutorial, it was found that the Micro810 controller does not support a connected PanelView. The designer should verify all product compatibility and applications before purchasing.

Connected Components Workbench Application

Installing CCW is beyond the scope of this tutorial. It does take some searching around on Rockwell’s website to find the correct download.

After the program gets installed and launched it will look like the picture in Figure 1.

CCW Main Window
Figure 1: CCW Main Window

The program’s user interface may look familiar because it is based on Visual Studio, a very popular Integrated Development Environment (IDE) best known for .NET Framework applications.

Click “New Project” and give the project a name, such as “stack light”.

The Add Device dialog box will pop up as seen in Figure 2.

Add Devices Dialog Box
Figure 2: Add Devices Dialog Box

Using this dialog box, the designer can select a controller and any other peripheral devices, such as an HMI station or a VFD. The user cannot add multiple devices at once, but can return to this screen using the Add Device button in the Project Organizer.

COOL PROGRAM FEATURE – notice that in the upper right corner of the Add Device dialog box there is a “Select existing device” hyperlink. This is useful when the user is creating a new project for existing hardware. If the device is online, the user can use this link to select the exact online device.

For this tutorial, select the Micro810 > 2080-LC10-12QBB. This controller has the following embedded I/O: eight 24-Volt digital inputs, four 0 to 10 Volt analog outputs, and four source outputs. Select the latest version and add the controller to the project.

Base CCW Project
Figure 3: Base CCW Project
Project Organizer
Figure 4: Project Organizer

One of the biggest changes to this platform is the not-so-subtle difference in the user interface. This, again, is because Connected Components Workbench is based on Visual Studio.

Building – Kind of like a “verify”

Before the user can download the program into the controller, it must be “built”. Without getting too technical, this basically means the code is compiled into a binary format that the controller can run. The higher-level controllers may also perform this step as part of the verify and/or download process. If there are errors in the user’s program, they will pop up at the bottom of the application during the build step. To build the project, right-click on the controller and click “Build”.

Output Window

Output Window
Figure 5: Output Window

The output window displays the output of the compiler during the build process. This is very similar to the same window in Logix 5000 after a verify has been completed by the user. If there are any errors inside of the project they will be visible in this window.


The Micro800 PLC family supports several programming methods, including ladder logic, structured text, and function block diagrams. For this tutorial, ladder logic will be the focus.

Ladder Programming

To add a ladder program to the project, right-click on Programs inside the Project Organizer, then click Add > New Ladder Diagram.

Local and Controller Variables

As with higher-level controllers, there are two scopes for tags: Controller (global) and Program (local variables). This can make programming and scaling these applications easier by letting users design code through a controller-to-local tag mapping. For example, if the user had a set routine for a VFD that was consistent across instances, they could make copies of the ladder diagram and utilize local variables, avoiding the need to rewrite a lot of code.

Starting to Edit the Ladder Diagram

To edit the diagram, double-click it in the project organizer.


The toolbox usually appears as a tab on the far right side of the main window. If it is not visible, click View > Toolbox, or press Ctrl + Alt + X. While editing the ladder diagram, the toolbox will look like Figure 6 below.

Toolbox Window
Figure 6: Toolbox Window

To add elements to the ladder diagram, simply drag them over from the toolbox. Each time the user drags over an element, a popup will appear to select the tag to use for the element. For example, if the user drags over a direct contact, the popup will ask for a BOOL to attach to the instruction. The user can also add new tags of the appropriate data type from within the same window.

User Defined Function Blocks – Similar to Add-on Instructions

This platform also supports user-defined function blocks – which can be programmed using any of the supported methods. This is beyond the scope of the tutorial, but the point is that this kind of programming is supported by these controllers.

Ladder Programming

Ladder Diagram
Figure 7: Ladder Diagram

In Figure 7 above, there is a ladder diagram to control a simple traffic light. The diagram looks very similar to Logix 500 or 5000, though there are some subtle differences.


As seen in Figure 7 above, the input data to some of the timer instructions looks rather unusual. It is a constant of the TIME data type, and it can use a variety of time bases; “T#5S” means five seconds. As another example, “T#1H450MS” is one hour and 450 milliseconds.
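To make the notation concrete, here is a small sketch (in Python, outside any PLC environment) of how such TIME literals break down into value/unit pairs. The function name and the unit table are ours, not part of CCW:

```python
import re

# Milliseconds per unit for the time bases used in TIME literals (assumed set).
UNIT_MS = {"D": 86_400_000, "H": 3_600_000, "M": 60_000, "S": 1_000, "MS": 1}

def parse_time_literal(literal: str) -> int:
    """Return the duration of a TIME literal such as 'T#1H450MS' in ms."""
    body = literal.upper()
    if body.startswith("T#"):
        body = body[2:]
    total = 0
    # Match value/unit pairs; "MS" must be tried before "M" and "S".
    for value, unit in re.findall(r"(\d+)(MS|D|H|M|S)", body):
        total += int(value) * UNIT_MS[unit]
    return total

print(parse_time_literal("T#5S"))       # 5000
print(parse_time_literal("T#1H450MS"))  # 3600450
```

So “T#1H450MS” works out to 3,600,000 ms for the hour plus 450 ms, matching the reading above.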

Instruction Block

In programming Figure 7 above, the instruction block was used for the timers. The instruction block can be found in the toolbox. Once an instance of the instruction block is dragged over into the ladder diagram, the specific instruction is specified by the user.

PanelView Added To the Project
Figure 8: PanelView Added To the Project

Adding A PanelView To the Project

Adding a PanelView to the project is much like adding a controller or drive. Click the Add Device button in the upper left-hand corner of the Project Organizer window and follow the on-screen prompts.

New Toolbox

The toolbox window is different when working in the PanelView portion of the project. Instead of ladder instructions, the toolbox contains HMI objects. Many of these are similar to their FactoryTalk ME counterparts – multi-state indicators and momentary push buttons, to name a couple. If the user is familiar with FT ME, he/she will understand how these objects work. However, binding is more limited in Connected Components Workbench.

Binding Objects to Tags
Figure 9: Binding Objects to Tags

Connected Components Bindings

All the tag bindings in Connected Components Workbench are done in the properties of each object, and only the properties exposed for binding are available. However, there is something called the “User Defined Object Library”. It is assumed that this is a template engine through which users can create their own HMI objects for use in their applications.

HMI Tag Database

When binding HMI objects to controller tags, there is an intermediate tag database that has to be in place. This connects the controller tags with the HMI tags.

HMI Tag Database Editor
Figure 10: HMI Tag Database Editor

The HMI Tag editor is seen above in Figure 10. As the figure shows, you can bind the HMI tags to a controller listed in the project and specify an address in the controller (if you have a PLC other than a Micro810). The user can then use this database to animate the HMI objects just as they would in an FT ME application.

Try before you buy!

With the Connected Components Workbench software being free, there is no reason not to download it and work with it before deciding to invest money in the platform for a machine control application.


There are a lot of technicalities to this platform, and it is less user-friendly than the specialized tools that the ControlLogix and FactoryTalk ME platforms bring. That ease of use and power, however, comes at a steep cost.

This tutorial touched on many aspects of the Connected Components Workbench platform. With the knowledge covered here, the user should have a good base to get started on a project of their own.

The Internet of Things and The Industrial Horizon

Over recent years, there has been an awful lot of hype over the Internet of Things (IoT). We’re told it will transform our lives, bring about the fourth industrial revolution and possibly even improve our work lives and living standards. Less is said about how this will be achieved, or even what the IoT actually is, and why it will benefit our lives.

In an age when technology is moving in leaps and bounds, it is important to stay informed about the changes happening to our world. Here we sift the fact from the fiction in order to understand how business can expect to change as the IoT ushers in a new era for us all.

The IoT explained, simply

In the most basic terms, the IoT is a network of products, both for commercial and consumer use, which are connected to each other and to various services or businesses through the internet. These items contain chips with sensors that are programmed to record and monitor various functions and uses. This information is then communicated within a closed network, or across multiple networks, to provide a better service or additional insights for the user of the item, while also capturing data that helps manufacturers with further development, innovation of new products, and, in some cases, the automation of processes.

There are multiple, almost endless applications for this connected technology. Take, for example, a smart meter. It records the energy consumed at a property and displays that information for the property owner or tenant, as well as relaying it back to the energy supplier. This real-time display of energy consumption enables people to better understand their energy needs and wastage. It also enables suppliers to monitor consumption across the grid in real time and to create plans and services in response to the data that is automatically gathered for them.
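The data flow just described – read locally, display, relay upstream – can be sketched in a few lines. Everything here is invented for illustration (the meter ID, the reading function, the publish call); a real device would talk to actual metering hardware and a real broker such as MQTT:

```python
import json
import random
import time

def read_meter_kwh() -> float:
    """Stand-in for a real metering sensor."""
    return round(random.uniform(0.1, 2.0), 3)

def publish(record: dict) -> str:
    """Pretend relay to the supplier; a real device would use MQTT or HTTP."""
    payload = json.dumps(record)
    print("relayed to supplier:", payload)
    return payload

reading = {"meter_id": "METER-001", "kwh": read_meter_kwh(), "ts": time.time()}
print("local display:", reading["kwh"], "kWh")  # what the tenant sees
publish(reading)                                # what the supplier receives
```

The point is the dual audience: the same record serves the consumer’s display and the supplier’s grid-wide analytics.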

The same principle is used for any number or kind of item, from smart concrete that gives updates on the condition of the material via a chip and sensor inserted into it to industrial robots that have sensors to alert owners of their maintenance needs before parts wear out and something goes wrong.

Where the IoT could take us

Understandably, the expectations for the IoT are high. By connecting all manner of devices to each other and to businesses, more data can be captured, analyzed, and acted upon. In some cases, humans won’t even need to be present to put the machines to work; remote operation of smart, connected devices will be the new way of working – or so we are told.

It is conceivable that machines and devices of all kinds could be programmed to work together to produce products and even order new raw materials when the stock is running low so they can carry on with their work uninterrupted. Workforces could be vastly reduced and processes not only automated but managed remotely. The massive amounts of data accumulated by all of the devices connected to the IoT will provide even more room for opportunity, understanding, and insight to manufacturing and other work.

The integration of radio-frequency identification (RFID) with the IoT is expected to transform supply chains too. RFID means specific items can be tracked, and through the IoT, real-time updates to delivery, tracking, and inventory systems could be possible. It would also cut down on counterfeit goods, help better manage the expiration of perishable items, and allow for remote monitoring of conditions during delivery, such as temperature, moisture levels, and even potential contaminants.

Other recent technological advancements, such as blockchain and tangle technology, are being combined with the IoT to bring about a marketplace without transaction fees, and even machine-to-machine transactions that allow equipment to purchase and manage small transactions with cryptocurrencies on behalf of their owners, or even autonomously.

IoT hurdles yet to be cleared

All this embedded technology negating the need for human involvement has its downsides though. Some of the expectations for the IoT are, as yet, unrealizable for a number of different reasons ranging from security issues to connectivity problems and even the fact that maybe humans do want to decide if they’ll order more milk instead of leaving the decision up to their fridge.

The recent boom in the number of smart, connected devices that make up the IoT has given new life to an issue that has been around since the dawn of personal computers: different programming and coding languages can stop devices ‘talking’ to each other, or make it difficult. In a similar way to Mac and Windows software being incompatible, many of the items in the IoT have difficulty communicating, largely because of different programming languages and a lack of accepted standards that could make communication possible.

The Institute of Electrical and Electronics Engineers (IEEE) Standards Association is working on a long list of standards that address some of the communication issues for machines and devices on the IoT, but even as this work is being done, the Internet Society has noted that the need for other interoperability standards is coming to light. Without the ability to communicate with a range of different objects and machines in a common language, the IoT suddenly seems more like the Internet of Some Things but Not Others.

Corporate privacy and security also become much harder to manage with the IoT. Data breaches, cyber attacks, and compromised systems are a real threat, costing the American economy somewhere between $57 billion and $109 billion in 2016. While the IoT could streamline business processes, give transparency to supply chains, and allow for a decrease in manufacturing workforces and running costs, it could also open businesses up to digital threats of many kinds.

Along with that is the issue of the massive amounts of data being collected and how that data is used. While a company may make the best possible effort to store and protect the data collected from its own machines and from its products in use by consumers, there is still the potential for it to fall into the wrong hands, or for unscrupulous organizations to use it for less than beneficial purposes. While there aren’t many countries with specific laws governing IoT data collection and use, the Federal Trade Commission has compiled some best practices for companies that create IoT-connected products to protect the data gathered from users. Aside from that, however, the security and privacy of information available on the IoT are governed by laws written before the IoT was ever imagined.

The sheer size and complexity of the IoT can also be a cause for concern. Any bug, virus or glitch could have serious consequences for a business relying solely on embedded technology to get things done. A simple power failure could stop production and lose thousands if not millions for a manufacturer reliant on connected technology and robots.

The IoT is definitely an exciting opportunity for innovation in businesses of all kinds. The applications seem nearly endless; the convenience, and the ability to automate everyday processes, almost infinite. The issues with connectivity, interoperability, security, and complexity, however, are real, and they will take thought and cooperation from all players to solve before the big dreams can be realized. Fortunately, there are many organizations and technology firms working to this end, and one way or another, the IoT in some form will most certainly change the future for all of us.

Best Practices for PLC to PLC communication


This tutorial will cover best practices for PLC to PLC communication. It will cover what to do with new AB ControlLogix processors as well as old legacy communication practices. In some cases, they are the same.

Only reads, never writes!

To state this again: only reads, NEVER writes. It is never best practice to write to another PLC. First, the receiving PLC gives no indication of where the data is coming from. Second, if the data points are not well documented, a user may use these bits in other parts of the logic and then have to troubleshoot why the program is not working correctly. These issues are rather hard to troubleshoot as well.

Best Practice – Produced Consumed Tags

By far, the best practice for new ControlLogix and CompactLogix processor-to-processor communication is to set up produced/consumed tags. This is the fastest, most reliable way to get data back and forth between processors. This practice also does not count against the user’s supported Ethernet connection count. One common pitfall with this structure is failing to use the built-in diagnostic tools to detect a communication drop; this tutorial will cover how to add those diagnostic data structures inside the tag structure. Another downside is that produced/consumed tags cannot be added, deleted, or changed online.

Produce tags even when there are no consumers yet, and produce them in each supported data type. This takes a little forethought, but there will come a time when the user will be glad they were set up. So, how does the user do that?

Setting up a System Interface UDT

It is good practice to set up ONE UDT to take care of all the produced/consumed tags. This ensures that all the communication mapping uses the same data types. For this tutorial, a UDT called System Interface was created; an image of this data type can be seen in Figure 1 below.

System Interface UDT
Figure 1: System Interface UDT

System Interface consists of DINT, INT, REAL, and CONNECTION_STATUS data types – all the data a user would need to produce, and more. There is one important thing about this UDT: it is 500 bytes in size, and 500 bytes is the maximum size of a produced tag. The UDT also contains a CONNECTION_STATUS tag, which is what monitors the connection(s) for the produced/consumed tags.
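As a back-of-the-envelope check, the 500-byte budget can be tallied from assumed element sizes (DINT = 4 bytes, INT = 2, REAL = 4, CONNECTION_STATUS taken as 4). The member counts below are invented for illustration, and real Logix packing/alignment rules may differ slightly:

```python
# Assumed element sizes in bytes; actual Logix packing rules may differ.
SIZES = {"DINT": 4, "INT": 2, "REAL": 4, "CONNECTION_STATUS": 4}

def udt_size(members) -> int:
    """Sum the packed size of (datatype, element_count) members."""
    return sum(SIZES[dtype] * count for dtype, count in members)

# Hypothetical layout for a System Interface style UDT.
system_interface = [
    ("CONNECTION_STATUS", 1),
    ("DINT", 62),
    ("INT", 62),
    ("REAL", 31),
]
size = udt_size(system_interface)
print(size, "bytes; fits a produced tag:", size <= 500)
```

Sketching the arithmetic like this before creating the UDT avoids discovering the 500-byte limit at download time.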

Setting a Tag to Produced
Figure 2: Setting a Tag to Produced

In Figure 2 above, a produced tag has been set up with the name “J4050_DATA”. This tag is of the System Interface data type, so it has the same structure seen in Figure 1. To set the tag as a produced tag, right-click the tag in the controller tags window and click “Edit Tag Properties”. When the Tag Properties window pops up, select Produced in the Type selection box. Once that is selected, click the Connection button to the right of the Type selection box, and the produced tag connection window will pop up.

Connection Tab

In this tab, the user can specify the maximum number of consumers. It is always good practice to specify more consumers than are needed.

Status Tab

This tab, as seen in Figure 3, shows that the connection status is included with the produced tag. If it was not included, the radio button would not be selected.

The connection status object is monitored ONLY on the consumer side. The producing PLC doesn’t care about the connection status. If the user wants to set up alarms on these tags, they must do it on the consuming side PLC.

Produced Tag Connection Status Tab
Figure 3: Produced Tag Connection Status Tab

That’s it for the produced tag; now the program can be downloaded to the PLC.

Consumed Tag

Add the PLC into the I/O Tree

This is the most difficult part of the procedure because every situation is different. Sometimes, the user can go online with the consumer and add the producer PLC into the I/O tree using the discover module tools. Other times, the user must add it offline and needs to pay close attention to the configuration to avoid a second, third, or fourth download.

In this tutorial, it will be added in an offline state. The producing PLC is in the same chassis as the consuming PLC, just one slot to the left. In Figure 4 below, the producing PLC can be seen in slot 2 (called J4050_1) and the consuming PLC can be seen in slot 3 (called J4050_2).

Adding Producing PLC into Consuming PLC I/O Tree
Figure 4: Adding Producing PLC into Consuming PLC I/O Tree

The consumed tag is also straightforward. First, the System Interface user-defined data type should be transferred over via import/export. Then, a tag should be created of the same datatype as seen in Figure 5.

Consumed Tag
Figure 5: Consumed Tag

After the tag is created, right click on the tag and click on “Edit Tag Properties”.

Consumed Tag Configuration
Figure 6: Consumed Tag Configuration

Figure 6 shows the consumed tag configuration. Again, the tag was configured by first creating a tag of the System Interface data type, then right-clicking and editing its properties. The connection configuration is also shown in Figure 6. The producer was selected from the dropdown list; if the user doesn’t see anything here, then the producer wasn’t added to the I/O tree in the previous steps. The “Remote Data” field is an important one: it names the tag in the producing processor that the user is trying to read. It is typed in manually, so the user should make sure there are no mistakes. If there are mistakes, the connection status should reflect that the tags are not connected.

After this configuration is completed, the program can be saved and downloaded into the consumer PLC.

Legacy PLC to PLC communication


Suppose the user is a contractor installing a new ControlLogix or CompactLogix PLC, and he/she needs to grab data out of an old SLC, MicroLogix, or PLC-5. He/she also needs data from the new PLC on the other side (in the older controllers). Let’s go over the best way to do this.

SLC data Into ControlLogix

This will have to be done via message instructions. It is straightforward and will not be covered in detail here. The high-level view: the ControlLogix processor reads from the SLC PLC into a new tag block, and the tag block must be of the same data type being read from the SLC PLC.

ControlLogix Into SLC PLC

It would be easy to set up a write message block to the SLC PLC. However, that would break our self-imposed best-practice rule of only reading from a PLC; so, let’s not do that.

The goal here is to read ControlLogix data into an SLC PLC. Thinking of the tag structure in an SLC, it is not obvious how this could be done: SLC PLCs use flat data files, while the ControlLogix tag structure uses user-defined, named tags. The answer is SLC mapping.

In the Logix 5000 main menu, under “Logic” there is a selection for “Map PLC/SLC Messages”. This can be seen in Figure 7 below:

Legacy Mapping Menu Item
Figure 7: Legacy Mapping Menu Item
SLC Mapping Window
Figure 8: SLC Mapping Window

In Figure 8 above, the SLC mapping window can be seen. There is an entry in the table specifying a file number and name. The file number is what the SLC PLC will point to when reading, and the name is the data source that the file will be mapped to. In Figure 8, in the background, the source tag called OLD_PLC_INT can be seen; it is an array of twenty integers. So, in this case, the user would point his/her SLC read message instruction to file “N1” – “N” because the source data type is an integer, and “1” because it is mapped in file one. If the source data type were a REAL, the user would point to “F1”.
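The file-letter convention in that example can be captured in a tiny lookup. The letter table below covers only the types mentioned here; it is an assumption for illustration, not an exhaustive Rockwell reference:

```python
# Assumed file-letter convention for mapped SLC files (not exhaustive).
FILE_LETTER = {"INT": "N", "REAL": "F"}

def mapped_file(datatype: str, file_number: int) -> str:
    """Build the SLC file reference a read message should point at."""
    return f"{FILE_LETTER[datatype]}{file_number}"

print(mapped_file("INT", 1))   # N1 - e.g. OLD_PLC_INT mapped to file 1
print(mapped_file("REAL", 1))  # F1
```

Keeping a table like this in the project documentation makes it obvious which mapped file corresponds to which ControlLogix tag.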


Sometimes, best practices cannot be followed, and you must write to a PLC. Maybe the PLC doesn’t have enough free memory to support a message instruction. Maybe the PLC cannot be taken offline for a download, which is sometimes required to get a message instruction working. In any case, if the user must write to a PLC, it is imperative to document where the data came from.

Blockchain: What is it and where will it take us?

For many, blockchain is synonymous with Bitcoin and other cryptocurrencies. However, although blockchain technology is the basis for cryptocurrencies, it has many more applications than digital currency. Some have stated that blockchain has created the ‘internet of value’, while others tout it as the one thing that will change the way we transact with each other forever. While these statements may be true, many people today don’t fully grasp what it is or how it works. Below we shed some light on blockchain technology.

Blockchain Explained

Blockchain - Bitcoin

Blockchain was invented by a person, or group of people, known as Satoshi Nakamoto, who introduced Bitcoin, the first cryptocurrency, in a 2008 paper describing it as the first purely peer-to-peer version of electronic cash. Bitcoin relies upon blockchain technology which, when broken down to its component parts – distributed files, cryptography, and openness – is nothing new. What is new is the use of these parts in unison.

Essentially, blockchain is a computer file used for storing data, similar to any other file. What makes blockchain different, however, is that the files, or blocks, are stored simultaneously on multiple computers across a decentralized network and are regularly reconciled, with the network automatically checking itself around every ten minutes. This ensures that no individual or organization has control over the content of the file. Editing the information in a file requires agreement between every single computer storing it, which brings into play the second element of blockchain tech: cryptography.

Cryptography ensures that each piece, or block, of content within the blockchain is encrypted. To change or sometimes read the information contained in the chain of data you need the code that relates to a specific block, or the whole chain. Without the right code, any attempt to edit or read the information contained in the blocks will be rejected. If the right key is provided and verified throughout the network, access to read or to make an edit is given and a timestamp allocated to the file. This then creates a new block of data for the chain. This new link is added and then distributed across the network, so each computer holding blockchain secured data will update to hold exactly the same information.
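The chaining and self-checking described above can be illustrated with a toy example. This is a minimal Python sketch (function names invented) of how each block’s hash locks in the previous block, so an edit anywhere breaks the links that follow it:

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Build a block whose identity depends on its content, timestamp,
    and the hash of the previous block -- the 'chain' in blockchain."""
    block = {"data": data, "timestamp": time.time(), "prev_hash": prev_hash}
    payload = json.dumps(
        {k: block[k] for k in ("data", "timestamp", "prev_hash")},
        sort_keys=True,
    ).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain):
    """Every computer in the network can re-run this check independently."""
    return all(cur["prev_hash"] == prev["hash"]
               for prev, cur in zip(chain, chain[1:]))

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("Alice pays Bob 5", genesis["hash"])]
print(chain_is_valid(chain))   # True
chain[0]["hash"] = "tampered"  # editing any block breaks the chain
print(chain_is_valid(chain))   # False
```

Real blockchains add consensus rules and proof of work on top of this, but the core tamper-evidence comes from exactly this hash chaining.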

The final element, openness, means that anyone with the right permissions is able to view or edit the data within the chain. This openness also ensures that each computer in the network storing the blockchain information is able to monitor the validity of the requests to read, change or update data and as such make sure that the necessary protocols are being met and algorithms answered correctly.

Different Types of Blockchain

Three different types of blockchains have emerged. Public, or permission-less, blockchains (like Bitcoin or Ethereum) mean that anyone can create a block of data by buying, selling, or giving their currency to someone else; anyone within the network can read and validate these changes, but no single person has control.

Public-permission blockchains are arranged within a group or string of organizations. Editing keys are available only to those in the group, while the public has permission to read the information contained in the chain. These are being implemented to help prove the provenance of products and authenticate claims such as organic growing or sustainable fishing.

Finally, private permission blockchains are starting to emerge. The data contained within them is not for public viewing and is only accessible by those within the private group; however, each change to the blockchain is recorded permanently. Private permission blockchains are being used in situations such as a group of small businesses that regularly interact and need to maintain an accurate and permanent record of all these exchanges.

Emerging uses of blockchain technology

Financial Blockchain Application

While cryptocurrency is most definitely the star of blockchain technology, it is by no means the only actor in the play. As blockchain essentially records each transaction of data, finance is certainly the most obvious application, and also the oldest application for the technology. However, ten years on from its launch, several other areas of industry, governmental process, and business transactions are showing promise in using the technology to streamline how we work. Below we’ll take a look at three of the most prominent areas adopting blockchain technology.


Electronic Hospital Records

Recent changes in America’s legislation for both the public and private health care systems have required all healthcare professionals to demonstrate meaningful use of electronic health records. Furthermore, the Affordable Care Act provided $28 million of federal money to implement electronic health records. Whilst this has pushed forward the move towards digitizing patients’ records, no conventions have been agreed upon for sharing the data between health providers.

Blockchain provides an opportunity to correct this oversight while also ensuring the integrity of patients’ medical histories across institutions. MedRec has designed one such system for creating family medical history records based on Ethereum’s blockchain technology, which allows records to be passed from generation to generation. The metadata is encrypted while still enabling secure access for patients across healthcare providers.

The Department of Health and Human Services has taken note of the innovation and in 2016 launched its own Blockchain Challenge, with the goal of investigating the relationship between blockchain technology, its use in health IT and health-related research, and how it may be used to advance industry interoperability. Entries covered a range of health-related areas, from redesigning health record systems to renovating payment systems and addressing the claims process.

Land Registrations

Land Registration

When disaster strikes or people are forced to evacuate their homes, it can become difficult to prove who owns a particular piece of land once things begin to return to normal. Even in times of relative peace and calm, to purchase property, buyers must locate the title and have the lawful owner sign it over. In some cases, flawed paperwork, forgeries, and defects in documentation can make this seemingly simple process almost impossible.

As many countries are moving towards digitizing land records, if they have not done so already, blockchain technology is being considered by many as the answer to creating accurate, incorruptible records of land ownership. Vermont, United States issued the first property deed using blockchain technology in April of this year. Various departments of the US government are recognizing that blockchain technology could provide solutions to their own issues around streamlining processes, audit burden, and data security and integrity.

Smart Contracts

Another area ripe for the adoption of blockchain technology is smart contracts, which negate the need for a third party to hold funds in escrow or administer the agreement.

The idea is for the contract between two parties to be written as code, including all the clauses. The ‘if this, then that’ programming allows for all eventualities that have been covered in the agreement. The parties involved in the contract can remain anonymous but the contract is placed in a public permission-less blockchain which in effect increases its security. The contract is programmed with specific trigger actions, such as the delivery of a product, the completion of a project or reaching a specified date, upon which the agreed funds are released or the contract completes itself in some other way according to the coded terms. Regulators would be able to view the blockchain information to understand the activity of the contract whilst upholding the privacy of each participant in the agreement.
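The ‘if this, then that’ idea can be illustrated with a toy escrow contract. This Python sketch uses invented names (`EscrowContract`, `record_event`) and simulates only the trigger-then-release logic, not any real blockchain platform’s contract language:

```python
# Illustrative sketch only: a smart contract's 'if this, then that' terms
# expressed as a tiny state machine. All names here are invented for
# illustration and do not correspond to any real platform's API.
class EscrowContract:
    def __init__(self, amount, trigger):
        self.amount = amount      # funds held by the contract itself
        self.trigger = trigger    # agreed trigger, e.g. "product_delivered"
        self.released = False

    def record_event(self, event):
        """If the agreed trigger event occurs, then the funds release."""
        if event == self.trigger and not self.released:
            self.released = True
            return f"released {self.amount} to seller"
        return "no action"

contract = EscrowContract(amount=100, trigger="product_delivered")
print(contract.record_event("project_started"))    # no action
print(contract.record_event("product_delivered"))  # released 100 to seller
```

On a public blockchain, the contract code and its state transitions would be recorded in blocks, so regulators could audit the activity while the parties remain pseudonymous.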

Smart contracts could be used by insurance companies to interact directly with customers without the need to engage a financial service provider, by health systems to give patients more control of their data and allow them to interact directly with researchers, by governments to make voting more secure, and even by businesses to pay their staff each month or fortnight.

Taking Control of Personal Privacy

Stories about how personal data has been used to manipulate people’s political or other views, or simply nudge them towards purchasing a particular product, have been prominent in the news over recent months. Blockchain is one way in which individuals may be able to take back control of the swathes of data they create each day by browsing the internet and using social media.

Personal information that is regularly used, including credit card details, first and last names, birthdates, and our unique identifying answers to personal questions, could conceivably be stored within a blockchain. Rather than giving away our sensitive information on each platform and providing the basis for an accumulation of other telling data about our lives, needs, aspirations and wants, this information could be stored in a blockchain and combined with biometric security features similar to those already used by smartphones and tablets. It would then be feasible to enjoy considerably more control over what information we share with whom, and to do away with passwords too.

The problem of this information about ourselves being duplicated multiple times and stored in a public space is being worked on by MIT’s Enigma project, which uses the latest privacy technologies to ensure private information is accessible only to those who hold the keys, while also reducing the power and time taken to process and store vast amounts of information. The aim is to create ‘secret contracts’ that keep information private from the computers storing it whilst still enabling them to process it.

The multiple applications for blockchain technology, the continued growth of cryptocurrencies, and the strong proof of work that Bitcoin has provided herald a new future for tracking and recording peer-to-peer transactions, be they public or private, across the world.

All about Add-On Instructions


Add-on instructions are generally small, tested pockets of code that the user wants to reuse multiple times in one or more programs. This tutorial will go into best practices for add-on instructions and how they work, and it will walk through creating a custom add-on instruction.

Best Practices

There are some best practices for add-on instructions, which are listed below.

Testing: Add-on instructions cannot be changed online so make sure whatever instruction the program is using is tested and will work in a production environment. Again, add-on instructions cannot be changed online.

Single Level: It is good practice to write only a single level of add-on instructions, meaning that you should not have add-on instructions within add-on instructions, even though it is possible.

Naming: It is good practice to name add-on instructions with a consistent prefix. For example, “ao_IO_DigitalOutput” would be an add-on instruction for a digital output. When add-on instructions have a consistent prefix, it is easy to distinguish them from other data types. For example, there might also be a User Defined Datatype (UDT) named “IO_DigitalOutput”.

Building your First Add-on Instruction – Digital Output

Figure 1: Creating New Add-On Instruction

To add a new add-on instruction to the program, right-click on the Add-On Instructions folder in the project explorer and click “New Add-On”. A popup will appear asking for the name of the add-on instruction. For this tutorial, it will be named “ao_IO_DigitalOutput”.

After the name is specified, click OK; the “Add-On Instruction Definition” window will then come up, as seen in Figure 2 below.

Figure 2: Add-On Instruction Definition Window

General Tab

This tab holds a lot of “metadata” for the add-on instruction, such as its name and description. The add-on instruction can also be configured for a different routine type, for example, a function block diagram. There is also revision information for the add-on instruction.

Parameters Tab

The Parameters tab is the most important window when creating an add-on instruction. In this window, the input and output values are created for the add-on instruction. In Figure 3 below, the “EnableIn” and “EnableOut” tags were already present, and the other parameters were created.

Figure 3: Parameters Tab

Each of the parameters will be an attribute inside of the add-on instruction data structure. Each instance of the add-on instruction will have the exact same data structure. The parameter columns are as follows:

Name – The name of the parameter

Data Type – The data type for the parameter. Depending on the usage, the available types are limited. Input and output parameters have to be basic data types, while InOut parameters can be virtually any valid data type.

Alias For – This is for specifying an alias for the parameter. The alias has to be a local tag within the add-on instruction. Local tags will be discussed later.

Default – This is for specifying a default value that will be set on each instance creation. This is especially useful if you have a lot of instances. If a parameter is usually “10” in all but a couple of instances, the user can specify the default and only change the instances that are unlike the rest. It is not very useful if the parameter is required, because the user will then have to supply a tag rather than a constant value.

Style – The style that the user wants this data in.

Required (Req) – If checked, a tag is required to specify this parameter in each instance. If not checked, the tag will be a constant value in the instance. If a parameter is not required, the user can still access the tag in preceding or succeeding logic. Basically, the unrequired parameter is still inside the instance data structure so it can be read or written at any point in the logic.

Visible (Vis) – If checked, the parameter will be visible in the add-on instruction block. If left unchecked, it will be hidden. If the user doesn’t absolutely need the parameter to be visible in the add-on instruction, it is best practice to have it invisible (unchecked). If a parameter is an output parameter and is a Boolean type, the parameter will show up as a little output flag next to the add-on instruction instance.

Description – The description of the parameter. These descriptions are visible inside the add-on instruction logic. It is very good practice to put text in for descriptions unless the tag name is all that is needed to describe the parameter; it rarely is.

External Access – This is almost always set automatically based on the parameter’s other attributes. Selection options are Read Only and Read/Write.

Usage – There are three types of usage for parameters: Input, Output, and InOut.

Input – Must be basic data types (BOOL, INT, or DINT). Input parameters can be read-only or read/write, and they can be accessed from outside each instruction instance. Input parameters are how the rest of the program interacts with the instances of the add-on instructions.

Output – Must be basic data types (BOOL, INT, or DINT). Output parameters are Read Only. They can be accessed outside the add-on instruction but can only be written inside of the add-on instruction. Output parameters are usually what the add-on instruction is calculating, or logically controlling.

InOut – These are tag references ONLY. They are always required, meaning an external tag must be referenced for these parameters. They can be of any valid data type. They can be read and written only internally in the add-on instruction. The InOut parameters do not show up within the add-on instruction instance data structure. This is an important concept to grasp, because most people do not use this usage type correctly.

For our first add-on instruction, create the parameters as shown in Figure 3.

Local Tags Tab

Figure 4: Local Tags Tab

The Local Tags tab, as seen in Figure 4, is for specifying local tags for the add-on instruction, and they can be of any valid data type. These tags can only be accessed inside the add-on instruction logic. Local tags are great for temporary data registers the add-on instruction needs to complete its logic. For example, if an add-on instruction needs a one-shot instruction, the one-shot’s storage bit would be a great local tag application.

For our first add-on instruction, we will just be using the input and output parameters in our logic, so no need for local tags this time.

Scan Modes Tab
Figure 5: Scan Modes Tab

Figure 5 above shows the Scan Modes tab. There are three “modes” each add-on instance can be in. Each mode can have a separate routine and has access to all the local tags and parameters inside the add-on instruction. The prescan and postscan routines are outside the scope of this tutorial and are rarely used. The EnableInFalse routine is commonly used when the add-on instruction instance has logic in front of it, because some of the data may need to be cleared when the logic in front of the instruction yields false.

For our instruction, we will not have any of these supplemental mode routines.

Signature Tab

Figure 6: Signature Tab

This tab is used to generate a signature for the add-on instruction. The ID is a checksum of the entire instruction, so if anything changes down the road, the ID will change with it. The tab also keeps a signature history of the add-on instruction for additional revision control.

Change History Tab

This simply tracks the changes detected in the add-on instruction. No picture needed.

Help Tab

Figure 7: Help Tab

In the Help tab, seen above in Figure 7, the user can specify additional text to help his or her successors. The help text will be visible if the instruction is highlighted in the program and the user presses the F1 key.

Add-On Routine

Figure 8: Add-On Routine

This is a simple but useful routine. When the automatic command comes in, and the output is not in manual override via the force value, the output will come on. When the force INT has a value of 1, the output is forced off, and when it has a value of 2, the output is forced on.

If you’re following along with your first add-on instruction, make sure your logic looks like the one seen in Figure 8.
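If it helps to reason about the rung behavior, here is a hypothetical Python equivalent of the Figure 8 force logic (the function name is invented; this is a sketch of the described behavior, not an export of the ladder code):

```python
# Hypothetical Python equivalent of the Figure 8 rung logic:
#   force == 0 -> output follows the automatic command
#   force == 1 -> output forced off
#   force == 2 -> output forced on
def digital_output(cmd: bool, force: int) -> bool:
    if force == 1:
        return False      # manual override: forced off
    if force == 2:
        return True       # manual override: forced on
    return cmd            # no override: automatic command drives the output

print(digital_output(cmd=True, force=0))   # True
print(digital_output(cmd=True, force=1))   # False (forced off)
print(digital_output(cmd=False, force=2))  # True  (forced on)
```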


Figure 9: Instances of the New Add-On Instruction

As seen in Figure 9 above, two instances of the new add-on instruction were created in the main routine. As seen in the figure, Output1.Cmd is true and the add-on instruction’s Output parameter is also true. If the force parameter is changed from zero to 1, Output1.Output turns off, and so on. It is working exactly as expected.

Instance troubleshooting

To see what is happening inside the add-on instruction, highlight the instruction, right-click on it, and click “Open Instruction Logic”, as seen in Figure 10 below.

Figure 10: Instance Viewing

Once that is clicked, the user is taken inside of the instance and can view what is taking place. The background is gray, meaning that the user cannot edit the rungs online, as seen in Figure 11 below.

Figure 11: Instance Viewing Logic


Hopefully, this gives you a nice jump on creating some of your own add-on instructions and helps you see the value and power in reusable code. I encourage you to start small and build small building blocks – eventually, they will turn into a beautiful “house” of code. Make sure you test along the way.


Differences Between PLC Programming Languages

There are a few different methods of programming a ControlLogix processor, listed here from most common to least common:

  • Ladder Logic (Most common, Preferred)
  • Function Block Diagram
  • Sequential Function Charts
  • Structured Text

In this tutorial, I will go over the different methods and describe how they work. Hopefully, the differences between programming methods will be clear.

First Things First

Each method has its own instruction set. Some are very similar between methods but some instructions are only available in one method versus the other. For example, the PIDE instruction, which is called “Enhanced PID” is only available inside the function block diagram method.

In a ControlLogix PLC, combinations CAN be used in conjunction with each other. The different routines can access the same tags regardless of method, meaning that a function block diagram can access the same tags as a ladder routine. It is good practice to use one method of programming, but if a needed instruction is only available in another method, it is possible to mix two or three methods.


For this tutorial, a routine was created in each method that does the exact same thing. The routine takes in a “Trigger” tag and outputs an “Output” tag. The “Output” tag turns on for ten seconds, then off for ten seconds, and repeats until the “Trigger” tag is cleared.


Ladder Logic
Figure 1: Ladder Logic

With ladder logic, you place different instructions on what are called “rungs”. Each rung has input instructions and output instructions. The input instructions are on the left and the output instructions are on the right. It simulates a “circuit”, with the left side being the power rail and the right side being the common rail. If the simulated “power” gets through the input instructions, it executes the output instructions. These routines are scanned from top to bottom, left to right. To elaborate, rung 1 will complete in its entirety before rung 2’s input instructions are evaluated.

In the case of the example above, the input instructions are normally open (XIC) and normally closed (XIO) contacts; XIC and XIO are the formal names of the instructions. The output instructions are TON and OTE instructions. The TON is a “Timer On Delay” instruction and the OTE is an “Output Energize” instruction. As described above, when the trigger tag (LAD_TRIGGER in this case) is set, the output will come on for ten seconds, then off for ten seconds. The output tag, in this case, is “LAD_OUTPUT”.
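The ten-on, ten-off behavior this rung implements can be sketched outside the PLC. The following is a minimal Python illustration of the same behavior (the function name and the elapsed-time approach are my own; a real PLC implements this with scan-driven timers, not wall-clock math):

```python
# Sketch of the blink behavior described above: output is on for
# period_s seconds, off for period_s seconds, repeating while the
# trigger is set. elapsed_s is time since the trigger turned on.
def blink(trigger: bool, elapsed_s: float, period_s: float = 10.0) -> bool:
    if not trigger:
        return False  # trigger cleared: output stays off
    # Within each full cycle (2 * period_s), the first half is "on".
    return (elapsed_s % (2 * period_s)) < period_s

print(blink(True, 3.0))    # True  (first ten seconds: on)
print(blink(True, 12.0))   # False (next ten seconds: off)
print(blink(True, 21.0))   # True  (cycle repeats)
print(blink(False, 3.0))   # False (trigger cleared)
```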

Ladder logic is the foundation of programmable logic controllers and the most widely used method. People argue that anything can be programmed using this method and that the other methods are not needed, a view I support, although I will still go over the other methods.

Ladder Logic Pros:

  • Well-organized code on rungs
  • Supports online changes well
  • Comments are visible and organized
  • Instructions take up little memory.

Ladder Logic Cons:

  • Some process control instructions are not available
  • Scanning issues can be created
  • Difficult for motion programming
  • Difficult for batch programming

Function Block Diagrams

Figure 2: Function Block Diagram

The above image shows an example of the exact same program written as a function block diagram. With function block diagrams, the user places instructions on a “sheet”, and one routine can have multiple sheets. Input instructions and output instructions can be anywhere on the sheet. To connect the input and output instructions, the user places wires between terminals. The sheet is scanned “continuously”, meaning there is no discernible start and end, but sheet 1 is scanned before sheet 2 and so on. The user must keep this in mind when using this method.

In the case of Figure 2, the input instructions are Input References (IREF), and the output instructions are TONR (Timer On with Reset) instructions. TONR instructions are available in Function Block and Structured Text, but not within Ladder or Sequential Function Charts.

This is a good method for programming motion controls. The user can set up ramps and s-curves for a VFD or servo motor.

Function Block Pros:

  • Good for motion controls
  • Good for low level signal processing
  • Visual method may be easier for some users
  • Wide instruction set

Function Block Cons:

  • Code gets disorganized
  • Sheets stack up and it gets tough to debug
  • Instructions take up more memory than in Ladder

Structured Text

Figure 3: Structured Text

This image shows the exact same program but written in structured text. The user writes every line of the routine(s) by hand. The keywords and instructions are blue, and the tags are colored red. The routine is scanned from top to bottom, left to right. The instruction set is similar to the instruction set for the function block diagram. As seen in Figure 3, the TONR instruction is used. Like function block, tags can be “wired” up to instructions by setting them equal. For example, in the above image, “ST_OFF_TIMER.TimerEnable” is wired to “ST_ON_TIMER.DN”. When this executes, that line of code will start “ST_OFF_TIMER” after “ST_ON_TIMER” is complete.

Structured text is really only useful for large amounts of calculations; it is not recommended for much beyond setting tags to values. Sometimes, when editing online, the edits do not take and the user must download again.

If the user is familiar with programming languages such as MATLAB, they should be comfortable in this environment. In structured text, “:=” sets a variable equal to a value, while the plain equals sign compares a variable to another variable or a constant.

Structured Text Pros:

  • Code is organized
  • User controls operations
  • Good for large calculations

Structured Text Cons:

  • Very abstract
  • Difficult syntax
  • Hard to debug
  • Hard to edit online

Sequential Function Charts (SFC)

Figure 4: Sequential Function Chart

The figure above shows an example of a sequential function chart. It follows the same concept as a traditional flow chart. There are conditional objects (here called transitions) and action objects (here called steps). The transition objects appear like so:

Transition Objects

and the steps like so:



The user places these at any point on the chart, and the chart size is variable, unlike the fixed sheet size in function block diagrams. Each step can have a set of actions, which looks like this:

Step Actions

In this case, there is only one action, but each step can have multiple actions. The user can split the sequence using branches of steps or transitions. The user wires the steps and transitions together, similar to the method in function block diagrams. A step HAS to be wired to a transition and vice versa: steps can’t be wired together in series, and neither can transitions. Steps and transitions can be wired in parallel to split the sequence.
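The alternating wiring rule just described can be expressed as a small check. This is an illustrative Python sketch of the rule only (invented names, not anything generated by the programming software):

```python
# Sketch of the SFC wiring rule described above: along any one branch,
# steps and transitions must strictly alternate; two steps (or two
# transitions) can never be wired together in series.
def sequence_is_valid(elements):
    """elements: ordered list of 'step' / 'transition' along one branch."""
    return all(a != b for a, b in zip(elements, elements[1:]))

print(sequence_is_valid(["step", "transition", "step", "transition"]))  # True
print(sequence_is_valid(["step", "step", "transition"]))                # False
```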

Each transition is a statement, not just a tag, and allows the sequence to advance to the next step when the transition yields true. Here are some examples:

Example 1
Example 2

The steps have built-in timers, configured in each step’s properties. Look at the example here:

Figure 5: SFC Step Properties Window

During online mode, the SFC shows the position of the sequence by putting a green box around the step that is executing. As seen in Figure 4, step_000 is active.

The SFC in Figure 4 does the exact same thing as the other programs: it turns the output tag on and off in ten-second intervals while the trigger is set.

If the user is familiar with OPTO-22 programming, they will be comfortable in this environment, as it is the same flowchart style of programming.

SFC Pros:

  • Online mode offers easy debugging
  • Built-in timers for steps
  • Actions attached to steps in user specified order

SFC Cons:

  • Abstract code can get disorganized
  • Syntax can be difficult
  • Complex sequence to do simple tasks
  • Online editing is a challenge


Each method has its applications and its strengths. Ultimately, the user must decide what environment they want to be in for the task they want to accomplish. It will depend on the user, and it will depend on the task. Luckily, AB controllers support all four.


For more information or to purchase a PLC module, please visit our home page here.