Learning | Liam Bee

A detailed engineering workflow for PLC and HMI projects, covering how to turn drawings and requirements into assets, areas, tags, logic, testing, CFAT, commissioning, and sign off without creating unnecessary rework on the way.

A lot of PLC projects become harder than they need to be because the engineering sequence is weak. Work starts before the standard assets have been selected, hardware gets added before anybody is clear on the equipment model, HMI pages are built against temporary variables, and bespoke logic appears before the team has settled what already belongs in the standard. The same thing happens with interlocks and cross-area behaviour. Instead of being designed early and then validated later, they are still being argued about when the project ought to be finished and ready for commissioning. By the time the plant starts moving, the code may contain enough functionality to run, but the project itself is harder to understand, harder to test properly, and harder to finish cleanly because too much of the development happened out of order.

A lot of that pain gets blamed on time pressure, unclear specifications, awkward clients, late changes, or the usual reality of engineering work. Those things are real, they do matter, but there is another cause sitting underneath a lot of them and it is more controllable than people think. The sequence of technical decisions shapes how much rework the project creates for itself. If the structure is settled early, late changes still hurt but they tend to land in known places. If it is not, even straightforward decisions echo through half the codebase and the HMI because too much was built on moving assumptions.

When people hear the phrase managing a project, they often picture Gantt charts, meetings, procurement, action trackers, commercial risk, and all the wider machinery around delivery. That side is real, especially on large jobs, but PLC engineers usually experience project management in a more immediate way. They see it in the order the work is done, in whether the documents are trustworthy, in whether the assets are standardised, in whether the HMI has stable interfaces, in whether the software ownership is clear, and in whether site time is spent validating a finished design rather than still finishing the development.

That is what this article is about: the engineering workflow that sits underneath a PLC project and gives it shape. In my experience, that workflow is one of the biggest contributors to whether a project feels calm and easy to work with, or whether it spends most of its life recovering from disorder in its own development.

The workflow I am going to walk through is simple enough to explain and deep enough to shape an entire delivery model. If you want to jump around, the sections below link straight to the relevant part of the article.

The article still follows the same 17-step workflow from documentation through sign off, and I summarise those steps in full later on so the opening can stay readable rather than turning into a wall of numbered items.

The value of the workflow comes from the order, the boundaries between the steps, and the discipline of not skipping ahead just because it is tempting to see something moving on screen early. A strong workflow can feel slower in the first few days, then save a lot of time everywhere after that. A weak workflow often creates the opposite effect, where the early activity looks fast and the rest of the job is spent paying back that borrowed time in rework.

I also want to be clear about the kind of project this article is aimed at. It absolutely applies to single engineers and small teams as well as larger projects. I am not talking only about massive batch plants with whole teams of engineers and formal design reviews at every stage. The same thinking applies whether you are building a packaged machine, a small water treatment skid, a process upgrade, a utility system, or a multi-area plant. The scale and the formality change, but the principle does not. You still need to decide what the system contains, where the behaviour belongs, what is reusable, what is site-specific, what the HMI should point at, and what must be proven before site.

The other reason the workflow is so useful is that it shapes more than the code. It influences how engineers can work together without colliding, how SCADA development stays aligned with PLC development, how commissioning sheets get written, how maintenance staff inherit the result, and whether the next project starts from a stronger standard or from another pile of one-off decisions.

The short version: a well-structured PLC project usually moves through five layers of thinking in order.

  1. Understand the requirement.
  2. Model the system as assets, controls, and areas.
  3. Decide what comes from standards and what must be bespoke.
  4. Build the software skeleton before the detailed behaviour.
  5. Use testing and site work to validate the structure rather than invent it.

If you keep those five ideas in view, the individual steps start making a lot more sense.

This is going to be a long article by design because I want to cover the workflow in the sort of depth that is actually useful when you are planning a project rather than merely nodding at the idea of structure. We will look at the documents you need, how to split the scope, how to decide what should be standard, why libraries come early, how to layer hardware, I/O, tags, assets, and areas, how HMI and testing fit in, what changes on existing plant upgrades, where teams usually trip themselves up, and why sign off is part of the engineering structure rather than an admin detail at the end.

Why PLC Projects Become Hard To Manage

A lot of PLC projects do eventually get finished, which is exactly why weak structure can hide for so long. Once the plant is running and the handover is done, it is easy to look back and assume the workflow must have been good enough. There is still a real difference between a project that reaches operation and a project that was built in a way that stayed readable, efficient, maintainable, and reusable while it got there.

The symptoms of a weak structure are usually familiar.

  • The HMI team asks which tag is final and gets told, “Use this one for now.”
  • A pump object exists in three slightly different forms because the first copy was made before the library behaviour was settled.
  • A sequence reaches into asset internals because the asset interface was never defined properly.
  • Interlocks span areas as loose booleans because the area structure arrived late.
  • Nobody can tell whether an odd behaviour is deliberate or just a leftover from commissioning.

Projects like that often feel busy and productive because plenty is happening at once. Screens appear, alarms get added, outputs are tested, questions are answered, and blocks multiply quickly. The trouble is that the underlying model is still unsettled, so the team is engineering the plant, the software structure, and the detailed behaviour at the same time.

I do not mean everything must be frozen before anyone opens the IDE. Real projects do not work like that. Requirements move, vendor information arrives late, field reality disagrees with drawings, and some details only become obvious once you simulate or test. The aim is simply to stop late behavioural decisions from forcing unnecessary structural redesign. A bespoke sequence change should not force a rethink of the whole asset library, and a site-specific interlock should not mean editing every asset block by hand.

The biggest shift is to stop seeing the PLC project as one flat pile of programming tasks. It is a layered engineering exercise. The documents describe a system that can be modelled, split into assets, controls, and areas, aligned with standards, connected to hardware and I/O, and exposed through readable tags and interfaces. Once that structure exists, area blocks can own orchestration, assets can own equipment behaviour, the HMI can bind to stable outputs, and testing can prove the behaviour layer by layer.

That changes how the work feels as well. Engineers are usually much calmer when the structure is visible because they can see what has been settled, what is under review, what still needs building, and where a late change belongs. In an unstructured project, every edit feels as though it might affect everything else.

It also changes the quality of the conversation. Instead of saying, “We need to alter the PLC a bit,” the team can say, “This is a bespoke area sequence change,” or “This is a standard asset issue,” or “This is an HMI binding problem because the interface is unstable.” That is useful language because it turns vague discomfort into a known category of work before anybody starts editing code.

Another useful distinction here is the difference between project acceleration and project rush. Acceleration means reducing waste by making decisions in a good order. Rush means pulling downstream work forward to create the feeling of momentum even though the upstream decisions are still soft. Loading hardware once the network architecture is known is acceleration. Building half the HMI against temporary internal bits before the interface exists is rush. The two can look similar at the start, but they diverge sharply later.

If you want a useful test for whether a project is structurally healthy, ask a few very practical questions.

  • Can you explain how the system is divided into areas?
  • Can you point to a clear list of asset types and say which are standard?
  • Can you describe where an interlock between two areas belongs?
  • Can you show the tags the HMI should rely on?
  • Can you identify which parts of the project could be reused on the next job?

If those answers are vague, the project may still run, but it is probably carrying more engineering debt than it needs to.

That is the backdrop for the workflow I am going to describe. The aim is to remove uncertainty early, keep software ownership clean, make site work calmer, and leave the next engineer with something that reads like a deliberate system rather than a scramble that happened to pass testing.

Start With Documentation

The first step is acquiring documentation, but that line undersells the job. A lot of engineering trouble starts because documentation is treated as a set of files to collect rather than a system description to interrogate. You can have plenty of documents and still not have a trustworthy basis for a build. The real task at the start of a PLC project is to decide what information exists, what it means, where it conflicts, what is missing, and what you will treat as authoritative when those conflicts appear.

At minimum, most projects benefit from some combination of the following source material.

  • Functional descriptions
  • User requirement documents
  • P&IDs
  • Process descriptions
  • Electrical schematics
  • I/O schedules
  • Alarm schedules
  • Cause and effect documents
  • Network architecture
  • Instrument data
  • Device manuals
  • HMI philosophies
  • Control narratives
  • Existing source code, if it is an upgrade

Not every project has all of those, and some existing plant upgrade jobs have almost none of them in a reliable form. That is precisely why the first step is not just downloading a drawing pack and moving on. It is reading for intent and contradiction.

For example, imagine the project includes a duty-standby pump pair.

  • The P&ID shows two pumps.
  • The I/O list only includes run feedback for one of them.

That is not a small detail to sort out later. It changes what the PLC can prove, what alarms are valid, and how reliable the duty control can be. If nobody reconciles that contradiction early, it does not disappear. It just gets pushed downstream until it reappears as software confusion.
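A contradiction like this is cheap to catch mechanically. As a rough illustration, a few lines of Python can diff the equipment implied by the P&ID against the I/O schedule. The asset names, signal suffixes, and expected-signal table below are all invented for the example, not a required convention:

```python
# Hypothetical cross-check: flag equipment from the P&ID that is missing
# the feedback signals the I/O schedule ought to contain.

EXPECTED_SIGNALS = {"pump": ["_RUN_FB", "_FAULT"]}  # expected suffixes per asset type

def find_missing_signals(pid_assets, io_schedule):
    """pid_assets: {tag: asset_type}; io_schedule: set of signal names."""
    missing = []
    for tag, asset_type in pid_assets.items():
        for suffix in EXPECTED_SIGNALS.get(asset_type, []):
            if tag + suffix not in io_schedule:
                missing.append(tag + suffix)
    return missing

# Two pumps on the P&ID, but run feedback wired for only one of them
pid_assets = {"P101": "pump", "P102": "pump"}
io_schedule = {"P101_RUN_FB", "P101_FAULT", "P102_FAULT"}

print(find_missing_signals(pid_assets, io_schedule))  # ['P102_RUN_FB']
```

The value is not the script itself but the habit: the discrepancy becomes an explicit list to raise as a query rather than a surprise during testing.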

One of the most useful habits at this stage is to categorise the documents by how they will actually be used during the project.

  • Development documents: the documents you are actively building against, usually the FDS (functional design specification), control narrative, and HMI specification.
  • Reference documents: supporting information such as P&IDs, electrical drawings, I/O lists, network architecture, and vendor package documentation.
  • Validation and discovery sources: existing code, panel inspections, live observations, operator feedback, and maintenance knowledge, especially on existing plant upgrades.

Within that, it helps to nominate a single point of truth for development, which is often the FDS. When the I/O list, a vendor manual, or another drawing disagrees with it, the team still has an agreed basis to continue against while the query is being closed out. The discrepancy should still be logged, in Jira for example, so it can be validated properly and resolved rather than being silently absorbed into the code.

Once you have grouped the documents that way, read them with engineering questions in mind.

  • What equipment actually exists?
  • What signals exist or ought to exist?
  • Where is the feedback?
  • What modes are expected?
  • Which sequences cross area boundaries?
  • What must be alarmed?
  • Which actions are automatic and which need operator confirmation?
  • Which devices are third-party packages?
  • Which behaviour is being delegated to drives, packaged plant, or another PLC?
  • Which parts of the system already have a known standard type?
  • Which parts are one-off enough that they probably need bespoke work?

That sort of questioning starts shaping the workflow long before the first block is created.

This is also where teams lose time by being too polite with bad information. If the I/O schedule is clearly missing instrument diagnostics, say so. If the P&ID implies valve proving but the rest of the documents do not support it, surface that early. If a packaged skid is supposed to expose commands and status to the main PLC but nobody has defined the interface, flag it. Quietly accepting weak inputs does not make the work more professional. It just pushes the uncertainty into software.

On existing plant upgrades, the documentation stage often becomes a discovery stage as well. Existing code, panel inspections, live observations, watch tables, operator conversations, and maintenance knowledge may all become part of the engineering source material. That makes a clear workflow more important, not less, because you need a working engineering picture of the system before you start layering new control around it.

For each document or source, it is worth being explicit about why it matters and what to challenge early.

Functional description or control narrative
  Why it matters: Describes the intended behaviour of the plant, equipment, modes, alarms, and sequences.
  What to challenge early: Vague wording, missing fault responses, unclear operator actions, and missing ownership between packaged equipment and PLC control.

P&ID or process drawings
  Why it matters: Shows equipment relationships, valves, instruments, process flow, and often hints at sequencing.
  What to challenge early: Instrumentation mismatches, missing proving devices, unclear maintenance bypasses, and process paths that are not reflected in control descriptions.

Electrical schematics
  Why it matters: Shows real hardware connections, safeties, relays, drives, panel architecture, and signal paths.
  What to challenge early: Signals that do not match the I/O list, undocumented hardwired logic, and missing feedback or local control arrangements.

I/O schedule
  Why it matters: Becomes the bridge between field devices, PLC addressing, tag naming, and later testing.
  What to challenge early: Incomplete diagnostics, inconsistent naming, missing spare strategy, and points that exist in drawings but not in the schedule.

Cause and effect or alarm schedule
  Why it matters: Defines how abnormal situations should change system behaviour and what operators should see.
  What to challenge early: Alarms without action, actions without alarm text, and inconsistencies between process intent and electrical reality.

Network architecture and device manuals
  Why it matters: Defines how hardware will actually fit together and what interface behaviour is available.
  What to challenge early: Unsupported devices, missing communication details, unclear ownership of device-side logic, and hidden assumptions around diagnostics.

Existing code and live plant behaviour
  Why it matters: Essential on upgrades to an existing plant because it may be the only truthful description of what the site currently does.
  What to challenge early: Behaviour that differs from the documents, temporary fixes that became permanent, and one-off patches that should not be carried into the new standard unchallenged.

Operator and maintenance knowledge
  Why it matters: Often reveals startup quirks, nuisance alarms, manual workarounds, and sequence pain that formal documents missed.
  What to challenge early: Assumptions that are operational habits rather than true requirements, and undocumented dependencies that should be made explicit in the new design.

A useful output from this phase is a short working note, even if it never becomes a formal deliverable.

  • What documents you have
  • Which documents are authoritative for development
  • Which decisions are still open
  • What the main system areas appear to be
  • Which standard asset types you expect to use
  • Where you can already see interface risk or missing information

The point of that note is simple. It turns passive reading into an engineering position. It says, in effect, “This is the system I believe I am about to build, and these are the assumptions that are still exposed.”

That matters because projects often drift when different people build from different private interpretations of the same document set. One engineer thinks the asset should own valve timeout handling. Another thinks the sequence should. The SCADA developer assumes status will come from the packaged skid. The PLC engineer assumes it will be reconstructed in logic. Those mismatches are all structural, and this is the cheapest point to surface them.

The documentation stage also tells you how much of the job is implementation and how much is still definition. Some projects are mostly implementation because the scope is clear and the standards are mature. Others are definition-heavy because the operational philosophy is still moving or the existing information is too poor to anchor the build cleanly. That distinction matters because definition-heavy jobs suffer badly when teams rush into detailed logic too early.

So step one is not glamorous, but it is foundational. Before you think about hardware trees, PLC tags, or HMI screens, you want a working understanding of what the plant is, how it is supposed to behave, where the documents disagree, and what remains undecided. Without that, the workflow that follows has nothing stable to stand on.

Split The Scope Into Assets, Controls, And Areas

Once the documentation has been gathered and challenged, the next step is the one that turns the project from a document pack into an engineering model. Split the scope into assets required, controls required, and areas of the system. It is one of the most useful habits a controls engineer can build because it separates three kinds of thinking that often get mixed together too early.

An asset is a physical or logical piece of equipment that needs a consistent control object around it. A pump, a valve, a packaged skid, or a duty-standby group can all be assets. In practice, an asset is something that benefits from having a recognisable interface, known status, known commands, known alarms, and clear ownership of its own local behaviour.

A control usually sits outside the basic behaviour of the asset itself. It is the higher-level logic that turns a collection of assets into the system the project actually needs. For example, a high level in a tank stopping a pump set, a transfer sequence between two areas, or a local process sequence coordinating several standard assets would all be controls. In most projects, these controls are bespoke to the site requirement even if the way you build them relies on standard methods or dedicated library functions.

An area is the structural part of the system where those assets and controls belong. An intake station, a filter area, a chemical plant, or a conveyor zone can all be areas. Areas give the project a topography and answer a simple question: where in the plant does this behaviour live, and what other behaviour is it closely related to?

These three views overlap, but they are not the same. A pump asset may sit in one area and take part in several controls, while a transfer control may link two areas and coordinate several assets at once. A packaged chemical skid may be one asset from the point of view of the main PLC and a whole internal process from the point of view of the vendor package.

Once you make the split deliberately, you stop trying to solve all of those relationships in one mental pass.

This stage often exposes the actual shape of the job far more clearly than the drawing titles do. A document pack might suggest the project is “just a pumping station upgrade,” but once you split it properly you may find:

  • Standard motor assets
  • Standard valve assets
  • Analog loops with diagnostics
  • A packaged interface to manage
  • Local sequences
  • Cross-area links and permissives

That is a very different picture from simply saying there are some pumps and valves on the job. The split gives you the basis for workload, structure, reuse, and later testing.
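One lightweight way to capture that split before any PLC code exists is as plain data. The sketch below is illustrative only; the class names, fields, and tags are invented, not a required schema:

```python
# Recording the assets-controls-areas split as plain data, so structural
# questions (ownership, reuse, cross-area risk) can be asked before coding.

from dataclasses import dataclass

@dataclass
class Asset:
    tag: str
    asset_type: str      # e.g. "pump", "valve", "packaged_skid"
    area: str
    standard: bool = True

@dataclass
class Control:
    name: str
    areas: list          # a control may span more than one area
    assets: list         # asset tags it coordinates

scope = {
    "assets": [
        Asset("P101", "pump", "Inlet"),
        Asset("P102", "pump", "Inlet"),
        Asset("DOS01", "packaged_skid", "Chemical", standard=False),
    ],
    "controls": [
        Control("DutySelect_Inlet", ["Inlet"], ["P101", "P102"]),
        Control("Transfer_Inlet_To_Filter", ["Inlet", "Filter"], ["P101"]),
    ],
}

# Cross-area controls are interface risk, so surface them early
cross_area = [c.name for c in scope["controls"] if len(c.areas) > 1]
print(cross_area)  # ['Transfer_Inlet_To_Filter']
```

A spreadsheet or marked-up P&ID does the same job; the point is that the model exists somewhere other than in one engineer's head.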

I like this step because it makes hidden complexity visible without turning everything into code too soon. It gives you a better set of questions to ask.

  • How many asset types do we really need?
  • Which behaviours are truly local to an asset?
  • Which behaviours belong to the area?
  • Which controls are reusable patterns and which are site-specific?
  • Which interlocks are internal to an area and which cross a boundary?
  • Where will the HMI need a summary view and where will it need equipment-level detail?

Those questions are much harder to ask when the project is still just a pile of PDFs.

It also prevents one of the most common structural mistakes in automation work, which is accidentally modelling the software around the way the documents were issued rather than around the way the plant behaves. Document packs are often arranged by discipline or drawing set rather than software ownership, so the electrical drawings, the P&ID, and the way operations talk about the plant may all group it differently. If the software simply mirrors whichever document was open first, you can end up with a structure that is technically possible and operationally awkward.

A proper assets-controls-areas split gives you a better map. For example, it may show that two pumps on different drawing sheets really belong to one duty group asset model, that a local recovery sequence should sit inside one area, or that a chemical dosing package should be treated as a black-box asset with a defined handshake. These are structural decisions, and they are much easier to make when you are still working with the model rather than wrestling live code.

Each view of the scope answers a different main question, and each fails in a characteristic way if skipped.

Assets required
  Main question it answers: What pieces of equipment need consistent ownership, commands, status, diagnostics, and reusable behaviour?
  Typical examples: Pumps, valves, dampers, VSDs, transmitters, packaged skids, motor groups, duty-standby pairs.
  Common mistake if skipped: Equipment behaviour gets scattered across sequences, HMI scripts, and ad hoc bits of code with no clear owner.

Controls required
  Main question it answers: What higher-level behaviours, sequences, interlocks, and operating rules need to coordinate the assets?
  Typical examples: Area interlocks, transfer sequences, process handshakes, shutdown logic, and local operating sequences.
  Common mistake if skipped: Control logic gets invented block by block without a clear picture of the full operational requirement.

Areas of the system
  Main question it answers: Where should the software be divided so related assets and controls sit together sensibly?
  Typical examples: Inlet works, washwater system, chemical plant, filter area, conveyor zone, machine section, remote station.
  Common mistake if skipped: Everything ends up in one flat program structure or in folders that follow drawings rather than operational ownership.

One of the best side effects of doing this well is that it improves conversation across disciplines. Process engineers tend to think in plant behaviour and operating states, electricians in field devices and signal paths, and SCADA developers in equipment views, alarms, commands, and status presentation. Controls engineers sit in the middle of all of that. The assets-controls-areas split gives you a language that different people can work with. You can say, “This valve is a standard asset in the filter area, but the backwash control around it is bespoke,” and everybody has a better chance of understanding what is fixed and what is still under design.

There is another benefit that shows up later, especially on repeated work. Once you start splitting scope this way, you begin building a catalogue of patterns. You notice that certain asset families recur, that certain controls return with local variations, and that certain area boundaries keep producing good ownership.

So before loading hardware or writing the first FB, do this split properly. Write the asset list, write the control list, and name the areas. The format is not important. A marked-up P&ID, a whiteboard sketch, or a simple spreadsheet can all do the job. What matters is that the system becomes something you can reason about structurally rather than just react to file by file.

Decide What Is Standard And What Is Bespoke

Once you know what assets the system contains, what controls it needs, and how the areas are divided, the next question becomes decisive: what is standard and what is bespoke?

This is one of the most valuable decisions a team can make early because it protects the parts of the project that should stay reusable and makes the parts that genuinely need project-specific engineering visible before everything gets mixed together.

Standard here means there is a known way your team wants a certain asset type to behave, expose status, accept commands, handle faults, and present itself to HMI and SCADA. For example:

  • A standard pump block might separate command and feedback, expose availability consistently, and own its own start confirmation timeout.
  • A standard valve block might own open and close proving and timeout diagnostics.
  • A standard analog asset might own scaling, quality, and alarm thresholds.
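To make asset-owned behaviour concrete, here is a minimal sketch of the command/feedback split and start confirmation timeout a standard pump block might own. It is an illustration in Python rather than real PLC code: scan-cycle timing is approximated with an explicit elapsed-time argument, where a real block would use a platform timer.

```python
# Sketch of a standard pump block owning its own start confirmation timeout.
# Names and the 5 s default are invented; real standards set their own.

class PumpStd:
    def __init__(self, confirm_timeout_s=5.0):
        self.confirm_timeout_s = confirm_timeout_s
        self.cmd_run = False           # command written by area logic
        self.fb_running = False        # feedback read from the field
        self.fault_no_confirm = False  # asset-owned diagnostic
        self._elapsed = 0.0

    def evaluate(self, dt_s):
        """Call once per scan with the time elapsed since the last scan."""
        if self.cmd_run and not self.fb_running:
            self._elapsed += dt_s
            if self._elapsed >= self.confirm_timeout_s:
                self.fault_no_confirm = True
        else:
            self._elapsed = 0.0

pump = PumpStd(confirm_timeout_s=5.0)
pump.cmd_run = True
for _ in range(10):            # ten scans of 1 s with no run feedback
    pump.evaluate(1.0)
print(pump.fault_no_confirm)   # True
```

Because the asset owns that diagnostic, area sequences and the HMI can rely on one consistent fault rather than each reinventing the check.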

Bespoke means the behaviour belongs to this project, this area, or this process requirement rather than to a reusable asset type. A local sequence, an interlock between areas, or a client-specific recovery path can all be bespoke even if they use standard assets internally.

The reason the distinction matters is that standard code and bespoke code should evolve differently. Standard assets should be protected, versioned, reviewed, and changed deliberately because they influence more than one project. Bespoke area logic should be free to solve the needs of the project as long as it respects the interfaces around it.

If those categories get mixed too early, the standard accumulates site-specific exceptions and the project becomes harder to maintain because nobody is sure which behaviours are safe to generalise later.

One practical way to think about this is to ask, for each asset type, “If I saw this same device on the next project, would I want it to behave in broadly the same way?” If the answer is yes, you are probably looking at a standard. If the answer is no, or if the real behaviour is mostly driven by a unique process sequence around it, then the bespoke logic likely belongs outside the core asset type.

Many teams delay this decision because it feels safer to “just get one version working first.” I understand the instinct, but the first working version has a habit of becoming the actual standard whether anyone intended that or not.

The first pump copied five times becomes the de facto library. The first faceplate wired to internal bits becomes the accepted interface. The first area sequence that writes straight into child internals becomes the pattern others follow.

A project may also expose that your standards are not as ready as you hoped. That is useful information. If the valve object does not expose enough diagnostics, or the analog asset does not have the right alarm model, decide whether the standard should improve before the project proceeds or whether the job needs a contained exception with a clear reason. Quietly burying workarounds in every area block is the worst outcome.

A useful test for standard versus bespoke:

  • If the behaviour belongs to the equipment type, it probably belongs in the standard asset.
  • If the behaviour belongs to the process around the equipment, it probably belongs in the project or area logic.
  • If the behaviour keeps reappearing on multiple projects, it may be time to promote it into the standard deliberately.
  • If the behaviour only exists because this site is unusual, keep it out of the reusable core unless there is a strong reason to widen the standard.
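Those four tests can even be written out as an explicit decision helper, so the standard-versus-bespoke call is recorded rather than made silently. The category labels and argument names below are invented for illustration:

```python
# Rough mapping of the standard-versus-bespoke heuristics to a category label.

def classify_behaviour(owned_by_equipment_type, recurs_across_projects,
                       site_specific_only):
    if site_specific_only:
        return "bespoke"                 # keep out of the reusable core
    if owned_by_equipment_type:
        return "standard asset"          # belongs to the equipment type
    if recurs_across_projects:
        return "promote to standard"     # widen the library deliberately
    return "project or area logic"       # belongs to the process around it

print(classify_behaviour(True, False, False))   # 'standard asset'
print(classify_behaviour(False, False, True))   # 'bespoke'
```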

Being clear about what is bespoke also protects the standards. A standard pump object does not need to contain every site-specific transfer sequence that might ever use a pump. It needs to provide a dependable equipment-level interface that project logic can rely on.

This is also where estimating gets more honest. Standard assets usually consume effort in configuration, instancing, I/O binding, and testing. Bespoke controls consume effort in design, review, implementation, and validation. Inter-area coordination has its own effort around interface definition and edge-case testing. If all of that still sits inside one generic “PLC programming” bucket, the estimate hides where the real risk actually sits.

Before loading the library and building the project tree, be clear about what already exists in the standard and what does not. The important part is identifying the missing pieces, both in terms of assets and in terms of controls, so the project knows what can come straight from the library and what still needs project-specific engineering.

Load Libraries And Build The Project Skeleton

After the standards decision comes the part many engineers are tempted to rush because it does not yet look like a working plant on screen. Load the libraries, set the project up properly, and build the technical skeleton before getting lost in detailed behaviour. This is where the notes and models become an actual project tree.

Loading libraries early matters because library objects are not just code snippets. They carry interfaces, data structures, diagnostics philosophy, naming patterns, and a lot of accumulated engineering judgement. If they arrive late, people usually build temporary local structures to keep moving, and those temporary structures have a habit of becoming permanent dependencies for I/O mapping, HMI tags, and later logic.

It is also the right point to confirm versions. If a pump type needs updating, or a shared datatype has changed, or a standard faceplate expects a newer status structure, it is far better to discover that before the project has been built around the older assumption.

Once the library side is settled, I like to build the project from the outside in.

  1. Establish the hardware environment.
  2. Add the I/O representation.
  3. Add the tag layer that makes the I/O readable and stable.
  4. Add the area blocks that will own local orchestration.
  5. Add the asset instances inside those areas.
  6. Connect the area layer to the top-level cyclic structure, such as Main OB or the equivalent main task on your platform.

Only after that skeleton exists should the bespoke details start arriving in a controlled way. The order matters because it moves from structural certainty toward behavioural specificity.

Hardware and I/O are structural facts. The tag layer turns that structure into a usable software interface. Areas create ownership boundaries, assets give repeated equipment a stable local home, and the main task gives the project its top-level execution shape. By the time you start adding site-specific sequences or cross-area interlocks, the project should already know where those behaviours belong.

The hardware step itself deserves more respect than it sometimes gets because it sets the physical shape of the project inside the engineering environment. On projects with multiple PLCs or distributed equipment, decisions made here feed directly into later I/O mapping, diagnostics, and commissioning.

After hardware comes I/O, and after I/O comes the tag layer that handles it. I am a strong believer in giving I/O its own readable interface rather than scattering raw hardware addresses through the logic. Different platforms may implement that differently, but the aim is the same: turn raw hardware representation into clear internal names and a stable layer the rest of the project can build on.

This is also the stage where projects either become readable or start drifting into an address-driven maze. If the tag layer is weak, both the HMI and the logic start depending on whatever seemed convenient at the time. In some projects it gets worse than that, because HMI information is tied directly to instance data and later modifications become far more awkward than they need to be.

// Example of a project skeleton before bespoke behaviour fills it out
// Names are illustrative rather than platform-specific rules

Project
  Hardware
    PLC_01
    HMI_01
    RIO_FilterArea
    RIO_ChemicalArea
    Drive_FeedPump01

  Tags
    RawIO
    IoMapped
    Commands
    Status
    Alarms

  Types
    PumpStd
    ValveStd
    AnalogStd
    AreaInterface

  Areas
    FB_Area_Inlet
    FB_Area_Filter
    FB_Area_Chemical

  Instances
    Area_Inlet
    Area_Filter
    Area_Chemical

  Main
    OB1 // or the equivalent primary cyclic task

A structure like that is not the final design. It is the skeleton that later detail can attach to without causing confusion. The exact folder names are less important than the fact that another engineer can open the project and understand the intended ownership model quickly.

Building the skeleton first also makes missing information visible while the project is still calm. If the hardware configuration cannot be completed because a remote rack is underspecified, that emerges early. If the I/O list does not support the tag model you want, that becomes obvious early. If a standard asset interface does not expose what an area will need, you discover it before a sequence has been built around the wrong assumption.

Teams that respect this stage usually write calmer software because they are not measuring progress only by how much bespoke logic exists on day three. They are also paying attention to how stable the project is becoming.

The aim is to end up with a skeleton you would be happy to explain on a whiteboard. Once that exists, the detailed work has somewhere sensible to live.

Hardware, I/O, And Tags

These three steps are often rushed together, but they do different jobs. Treating them as one blurred stage is one of the easiest ways to make a PLC project harder to read and harder to change.

  • Hardware configuration defines the physical equipment and the network topology. When you open the hardware view, you should be able to see the real installation reflected clearly.
  • I/O definition defines the signals on the PLC or remote I/O unit that owns them, so the project knows where each real channel belongs.
  • Tag handling sits above those owned channels and gives the rest of the software readable names instead of raw references.

The separation matters because PLC projects are not only about making hardware do something. They also need to represent plant behaviour clearly enough that people can read it, troubleshoot it, display it, and change it safely. If raw channels are mixed directly into area logic, the code starts reading like a panel drawing with timers attached. If those channels are translated once into meaningful project tags, the logic starts reading more like a description of the plant.

That becomes even more important on repeated or larger systems because raw addressing is unstable in a way that process behaviour should not be. Card positions change, spare points get used, remote I/O layouts move, and packaged units get integrated differently. A sensible tag layer absorbs that movement so the wider logic does not have to.

I also want requests, feedbacks, and derived statuses kept separate. A request for a pump to run is not the same as run feedback, and neither is the same as the pump being available. If those meanings are merged together, both support and HMI development become harder because people have to infer what the software is asking for and what the field device is actually doing.
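To make that separation concrete, here is a minimal sketch in Python. The article is platform-neutral, so everything here (the `PumpSignals` name, the field names) is hypothetical; the point is that the request, the feedback, and the derived availability stay three distinct answers.

```python
from dataclasses import dataclass

# Hypothetical sketch: a run request, the run feedback, and the derived
# availability kept as distinct signals rather than one merged boolean.
@dataclass
class PumpSignals:
    run_request: bool = False    # what the software is asking for
    run_feedback: bool = False   # what the field device reports
    fault: bool = False
    in_auto: bool = True

    @property
    def available(self) -> bool:
        # Derived status: availability is computed, never written directly.
        return self.in_auto and not self.fault

pump = PumpSignals(run_request=True, run_feedback=False, fault=True)
# The request is present, but the pump is neither running nor available,
# and each of those facts can be read without inference.
```

Because the three meanings never share a bit, support engineers and HMI developers do not have to guess whether a signal describes intent, reality, or fitness to run.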

Naming discipline belongs here as well. If the code says `PMP101.RunFeedback`, the engineer reads behaviour immediately. If it says `%I12.4`, a comment may help, but it is still a raw I/O reference. A proper mapping layer is better because it keeps that raw reference in one place and gives the rest of the project a stable name to work with.
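A mapping layer of that kind can be sketched in a few lines. This is an illustrative Python model, not a platform feature: `RAW_IO`, `IO_MAP`, and the tag names are all hypothetical stand-ins for whatever mechanism your environment provides.

```python
# Hypothetical sketch of a one-place I/O mapping layer: the raw address
# appears only in the map, and everything else uses stable names.
RAW_IO = {"%I12.4": True, "%I12.5": False}   # stand-in for the hardware image

IO_MAP = {
    "PMP101.RunFeedback": "%I12.4",
    "PMP102.RunFeedback": "%I12.5",
}

def read_tag(name: str) -> bool:
    # Logic and HMI read by name; only this function touches raw addresses.
    return RAW_IO[IO_MAP[name]]

running = read_tag("PMP101.RunFeedback")
```

If a card position changes, only the map is edited; every consumer of `PMP101.RunFeedback` keeps working unchanged, which is exactly the stability the rest of the project depends on.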

Tags also become the contract between disciplines. SCADA developers, test sheets, alarm lists, and trend configuration all end up pointing at names rather than raw channels. If the tag model is stable and meaningful, that contract stays healthy.

  • Hardware should own rack layout, networked devices, addresses, module types, communication setup, and device identity. It should avoid owning process behaviour, operator-facing semantics, and sequence rules.
  • I/O definition should own point identity, channel purpose, device mapping, spare strategy, and the relationship to documentation. It should avoid owning detailed control logic or assumptions that belong to assets and areas.
  • Tag handling should own readable names, mapping, scaling, diagnostics exposure, and the separation of commands and feedbacks. It should avoid owning broad process orchestration that belongs to area logic or asset state handling.

If you are building a small machine, this may feel like overkill at first glance. Sometimes the implementation really is just disciplined naming with a couple of mapping blocks. The principle still holds, though, because even small systems become harder to modify once physical representation and behavioural logic start bleeding into each other.

On more complex plants, this layer becomes a major quality factor. The earlier it is handled properly, the less temptation there is to let every downstream block solve the same translation problem in its own slightly different way.

So when your workflow says, “Add hardware, add I/O, add tags to handle I/O,” treat that as a real engineering stage. You are deciding how the plant will be represented in software, and that representation will influence everything that follows.

Area Function Blocks And Assets

Once the hardware, I/O, and tag layers are in place, the project is ready for one of the most important structural steps in the workflow: create the function blocks that handle each area of the project, then add the assets inside those areas. This is where the abstract model becomes a real software structure with ownership boundaries.

I like area blocks because they stop projects becoming either too flat or too device-centric. If everything sits directly under the main cyclic task, the project becomes hard to navigate as a system. If everything is reduced to asset objects with no meaningful area layer, the process behaviour that coordinates multiple assets has nowhere sensible to live.

Within each area, the assets become the equipment building blocks of that part of the system. The area owns the local context. The asset owns the equipment-level behaviour. A pump asset should know how to be a pump. The area should know why that pump is needed here, when it is requested, and what its status means for the rest of the process.

In a strong structure, the asset instance handles a few things consistently:

  • SCADA-facing interface
  • Requests for outputs
  • Basic functionality and local behaviour

That means the area does not need to worry about the contactor coil directly. It worries about the process decision to request the asset and about the consequences of the asset not responding. The HMI can point at the asset interface, and the area sequence can command the asset without caring about raw terminals.
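A rough sketch of that split, again in hypothetical Python rather than any particular PLC platform: the asset owns what "running" means, and the area only ever reads and writes the interface fields.

```python
from dataclasses import dataclass

# Hypothetical sketch of the asset-level split described above: the asset
# owns its local behaviour, the area and the HMI only see the interface.
@dataclass
class PumpAsset:
    requested: bool = False      # written by the area
    feedback: bool = False       # written by the I/O layer
    fault: bool = False
    running: bool = False        # derived by the asset, read by area and HMI

    def evaluate(self) -> None:
        # Equipment-level behaviour: the asset decides what "running" means.
        self.running = self.requested and self.feedback and not self.fault

pump = PumpAsset()
pump.requested = True        # the area's process decision
pump.feedback = True         # field confirmation arrives
pump.evaluate()
```

The area never touches the contactor coil or the fault handling; if the definition of "running" ever changes, it changes in one place inside the asset type.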

There is a testing advantage here as well. When area blocks and assets are cleanly separated, you can prove the asset type in isolation, then the asset instances with real I/O, then the area behaviour that coordinates them, and finally the inter-area links. If everything is tangled together, every test becomes a whole-system test whether you wanted it to be or not.

Area blocks also help with collaboration because one engineer can work on the chemical area while another works on inlet works, provided the inter-area interface is clear. An HMI developer can build screens against stable asset and area tags, and a library engineer can improve standard asset types without fighting a flat project structure.

At the top of this layer sits Main OB, or whatever your platform calls the main cyclic program. I prefer that top level to call area instances rather than dozens of asset instances directly. When somebody opens the project, the first thing they should see is a map of the plant or machine rather than a long shopping list of devices.

// Simple illustration of area-first orchestration

OB1
  Area_Inlet();
  Area_Filter();
  Area_Chemical();
  Area_Sludge();

// Inside Area_Filter
  Pump_FilterFeed_01();
  Pump_FilterFeed_02();
  Valve_Backwash_Inlet();
  Valve_Backwash_Outlet();
  Seq_Backwash();
  Interlocks_FilterLocal();

A pattern like that matters because it makes the structure visible. A person opening the project can see quickly that the top level is area-oriented and that the local equipment and sequences live under those areas. That is valuable during reviews, during commissioning, and months later when the next modification arrives.

This stage also forces a healthier set of design questions.

  • What does the area interface need to expose upward?
  • What local status should be available?
  • What alarms are area-level rather than equipment-level?
  • Which sequences belong here and which should remain outside?
  • Which assets are true children of this area and which are shared or cross-boundary resources?

Those questions are design work, and this is where they can be answered with clarity rather than as afterthoughts. Area blocks should not become anonymous containers where every awkward piece of project logic gets dumped once people run out of better ideas. They should hold local process ownership, not miscellaneous leftovers.

When this layer is healthy, the project starts reading like the real system. That is a strong place to be before the bespoke control details start arriving in force.

After the skeleton exists and the area and asset layers are in place, the workflow moves into the stage people often think of as the real programming. This is where bespoke controls are developed in the areas and where interlock and control links between areas are created.

A lot of engineering hours are spent here, but the reason this stage becomes productive now is that the structural questions have already been answered. This is where the project starts expressing its unique process behaviour, whether that is a wash sequence, a transfer routine, a dosing strategy, or a vendor-specific interlock.

The phrase “as and when they are needed” matters because bespoke behaviour should be developed against the real structure of the area rather than guessed in full before the area exists in software. Once the assets, interfaces, and tag surfaces are there, you can build the local sequences and rules with much better judgement because you know what the area already owns and what it can rely on.

This stage benefits from a discipline that many teams learn the hard way: project-specific logic should consume asset interfaces, not invade them. If the area sequence needs Pump 1 to run, it should issue a request or command through the pump’s interface and evaluate the returned status. It should not reach into the pump internals to energise outputs, suppress internal faults, or bypass local state handling. The more bespoke logic reaches through those boundaries, the more the structural clarity of the project starts collapsing.

Inter-area links deserve even more care because they are a common source of invisible complexity. Any time one area depends on another area’s readiness, level, mode, permissive, alarm condition, or sequence completion, you have a boundary that needs to be designed deliberately. These boundaries are often where people fall back on quick global bits. A boolean appears, gets used in three places, gains a second meaning, and becomes hard to explain later. I would much rather see inter-area links treated as explicit interface signals or handshake states with documented meaning.

For example, if a chemical dosing area depends on filtered water availability from a filter area, do not just scatter a loose permissive bit with a vague name. Decide what filtered water availability means.

  • One pump healthy?
  • Minimum pressure proven?
  • A sequence complete?
  • Local area in Auto?
  • No active shutdown?

If the meaning is important enough to control plant behaviour, it is important enough to define properly. The same applies to requests that flow the other way. If one area asks another to start or hold a process, that request needs a known contract.

One useful side effect of reaching bespoke logic at the right point is that complexity stops hiding. Some areas stay quite light because most of their behaviour lives inside standard assets, while others clearly carry the project-specific burden. That is useful for testing, peer review, operator explanation, and site preparation because the demanding parts of the job become visible early rather than only revealing themselves during commissioning.

Modes are a good example of this. Bespoke logic becomes messy very quickly if the project has not decided what the area boundary should do with Auto, Manual, Local, Maintenance, Simulation, or Out of Service. When the asset model already owns local mode handling sensibly, the area can focus on what those modes mean for process orchestration. If that ownership is blurred, the bespoke layer fills up with edge cases that should have been settled elsewhere.

A healthy project also lets you describe the bespoke logic in plain ownership terms.

  • The filter area owns the backwash sequence and commands standard pump and valve assets through their interfaces.
  • The sludge area contains the bespoke duty strategy around otherwise standard transfer assets.
  • The inter-area link between inlet and dosing is a defined readiness handshake.

Descriptions like that are useful because they tell you what the logic does and where it lives. That clarity pays back in testing as well. When bespoke logic sits inside known area boundaries and uses known asset interfaces, you can usually simulate a lot of it before site by stubbing statuses, triggering requests, and checking transitions. If the same logic depends heavily on raw I/O or direct access into child blocks, the test surface becomes much dirtier and much less reusable.

This stage is also where the team’s programming standard proves whether it is actually helping. A strong standard keeps the project-specific logic focused on the real process requirement. A weak standard forces the bespoke layer to compensate for missing asset behaviour, poor interfaces, and unstable tag models, which is one more reason the earlier steps matter so much.

By the time you reach this stage, the project is finally becoming specific to the site, but it is doing so inside a structure that can still be explained. That is the payoff for the earlier work.

Build The HMI Against Stable Variables

HMI development has a habit of exposing whether the PLC project structure is genuinely healthy. If the interfaces are clear, the screens come together around stable commands, statuses, alarms, and states. If the interfaces are weak, the HMI becomes a search through temporary bits and half-named internals.

That is why I like HMI development to happen against tags and variables that are already part of the intended interface model. The HMI should not have to guess the software architecture. It should be able to consume it. If an asset exposes mode, availability, fault status, current state, commands, and key process values consistently, the HMI gains a strong base. If an area exposes local sequence status, inter-area readiness, and operating state clearly, the plant view on screen can match the PLC’s own understanding of the process.

Developing the HMI too early against unstable internals is one of the most expensive forms of false progress on a PLC project. Screens appear, the client can click around, and everybody feels that the project is moving. Then the PLC interface changes because the asset model improves, the area logic is reorganised, or the raw I/O mapping is cleaned up, and suddenly the HMI is full of broken bindings or awkward compatibility shims.

The HMI also benefits from the same standard-versus-bespoke distinction as the PLC project. Standard faceplates should be fed from standard asset interfaces. Bespoke area pages should consume area-level tags and statuses that genuinely belong to that process view. If the PLC is structured well, the HMI can mirror that structure rather than inventing its own parallel one.

One of the best questions an HMI developer can ask is, “What should the operator understand from this screen without interpreting raw signals?” That question is deeply connected to the project workflow. If the asset layer already owns its behaviour and the area layer already owns local orchestration, the answer becomes much easier. The operator should be able to see:

  • What the equipment is doing
  • What it is being asked to do
  • Whether it is available
  • Why it is waiting
  • What has faulted
  • What actions are allowed in the current mode

Those are interface questions as much as screen-design questions. Alarm development belongs in the same conversation. Equipment-level alarms should usually come from equipment-level logic, and area-level alarms should usually come from area-level conditions. If the alarm list is being pieced together from scattered temporary bits because ownership is unclear, support will feel that later even if the alarm count looks complete on paper.
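The operator questions above can be answered mechanically from a healthy interface. A minimal Python sketch, with hypothetical field names, of a faceplate translating interface state into operator meaning rather than raw signals:

```python
# Hypothetical sketch: a faceplate answers the operator questions above by
# reading interface fields, not by interpreting raw signals.
def faceplate_text(requested: bool, running: bool,
                   available: bool, faulted: bool) -> str:
    if faulted:
        return "Faulted"
    if requested and running:
        return "Running"
    if requested and not available:
        return "Waiting - not available"
    if requested:
        return "Starting"
    return "Stopped"

line1 = faceplate_text(requested=True, running=True,
                       available=True, faulted=False)
line2 = faceplate_text(requested=True, running=False,
                       available=False, faulted=False)
```

Notice that "why it is waiting" falls out naturally because request, availability, and feedback were kept separate back in the tag layer; a merged boolean could not produce that text.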

Trending, permissive displays, maintenance views, and diagnostic pages also improve when they consume stable variables. Good HMI engineering is often a reward for good PLC structure.

The HMI team does not need to wait passively until every PLC detail is finished. A lot of useful work can happen in parallel once the interface assumptions are strong enough. Standard faceplates can be wired to standard asset structures, navigation can be laid out, and page patterns can be reused. The important thing is that the screens are anchored to variables that are meant to survive.

HMI development can also act as a healthy design review of the PLC interfaces. If the HMI developer struggles to explain what an asset exposes, or cannot tell whether a status belongs to the asset or the area, that is useful feedback. The interface still needs work.

Projects often underestimate how much rework bad HMI binding creates. That is why I would rather build the HMI against a stable project model slightly later than build it against unstable internals slightly earlier.

Simulate And Test Before Site

Simulation and testing deserve their own place in the workflow because they change the role of commissioning. A project that has been simulated and tested sensibly goes to site ready to validate a finished design against the real plant. A project that has not been tested properly goes to site still carrying too much unfinished thinking, and that is where expensive decisions start getting made under pressure. Those are completely different experiences.

When I say simulate and test, I do not necessarily mean a perfect digital twin of the whole plant. On some projects that is worthwhile. On many projects it is not realistic. What I do mean is that the software should be exercised by layer before site.

  • Standard assets should be proven against expected commands, feedbacks, faults, and mode transitions.
  • Area logic should be tested against stubbed asset statuses.
  • Inter-area links should be proven against known interface conditions.
  • HMI objects should be checked against the stable tags they are meant to consume.
  • Alarm paths and permissive behaviour should be validated as far as the available environment allows.

One of the advantages of the workflow described in this article is that it naturally supports layered testing. Because the project already distinguishes hardware, I/O, assets, areas, and inter-area links, you can prove each layer in a useful order. You do not need to throw the whole unfinished system into a single giant simulation and hope. Instead, you can ask targeted questions such as:

  • Does the standard valve object behave correctly?
  • Does the filter area backwash sequence react properly to expected child statuses?
  • Does the chemical area expose the right readiness state upward?
  • Does the HMI present the area state correctly?

That is the real value of the structure: it makes targeted testing possible. It is also where writing tests down starts paying off. A good project usually has some form of internal test checklist or commissioning sheet before it reaches CFAT, and that sheet should reflect the real ownership model of the project.

Simulation is especially valuable for the awkward parts of behaviour that are easy to forget under time pressure. For example:

  • What happens if a valve never proves open?
  • What happens if a pump request is present but the asset remains unavailable?
  • What happens if an area loses an upstream permissive mid-sequence?
  • What state does the HMI show during a transition rather than just at the stable endpoints?

Those are exactly the questions that cause expensive site delays when nobody asks them early enough. Testing before site also changes the quality of team communication because problems found in simulation are easier to discuss than problems found with a live plant waiting.
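The first of those questions makes a good worked example. A hypothetical Python sketch of a valve that resolves a failed open-prove into an explicit fault state, including the visible intermediate state the HMI question asks about:

```python
# Hypothetical sketch: a valve commanded open that never proves open should
# resolve to a fault state, not hang silently mid-sequence.
def valve_state(commanded_open: bool, open_proven: bool,
                scans_since_command: int, prove_timeout: int = 50) -> str:
    if not commanded_open:
        return "CLOSED"
    if open_proven:
        return "OPEN"
    if scans_since_command > prove_timeout:
        return "FAIL_TO_OPEN"      # the state the sequence and HMI must see
    return "OPENING"               # visible transition, not just endpoints

mid = valve_state(True, False, scans_since_command=10)
stuck = valve_state(True, False, scans_since_command=80)
```

Exercising exactly these paths in simulation is cheap; discovering on site that the sequence has no answer for a stuck valve is not.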

There is also a confidence benefit that should not be underestimated. Teams who have seen the project behave in a controlled environment go to site differently. They still expect surprises, because site always contains some, but they are not carrying the same level of structural uncertainty. They know whether the asset model broadly works. They know whether the HMI is wired sensibly. They know whether the core sequences have at least been exercised. That confidence usually results in better decisions when the inevitable field discrepancy appears.

I also think simulation is where weak interface design reveals itself. If it is hard to simulate an area because the area relies on too many raw signals or hidden child internals, that is a design smell. If it is easy to drive the area through clean asset statuses and requests, the structure is probably healthier. Simulation proves the behaviour and exposes whether the architecture is clean enough to test properly.

Some engineers resist formal testing because they feel they can “see it in the code.” Experience helps, of course, but even strong engineers miss things when behaviour remains mostly implicit. Explicit testing is not a sign that you do not trust the code. It is a sign that you understand code is only one representation of the system and that behaviour deserves to be observed directly before it reaches site.

So simulate and test as a real stage of the project, not as something you might do if time allows. Even a modest but deliberate internal test pass can save a surprising amount of site pain, especially if the project structure already supports testing by layer.

CFAT, Commissioning, And Sign Off

The last three steps in the workflow are often spoken about as if they were one general blur of final project activity, but CFAT, commissioning, and sign off are different jobs with different purposes. A lot of delivery frustration comes from letting them collapse into each other.

I am using CFAT here in the common sense of a customer or client-facing acceptance stage before site, often based on simulation, emulation, or a controlled office test environment. Different organisations use FAT and CFAT terminology slightly differently, but the important idea is the same. There should be a stage before live plant work where the client or wider project team can see the intended behaviour, comment on it, and witness that the software broadly aligns with the agreed documents and operating philosophy.

CFAT is valuable because it moves certain disagreements earlier. Operators may realise the HMI needs a different summary view. Process engineers may spot that a hold condition should behave differently. The client may want a manual intervention point made clearer. Those are much better conversations to have in a controlled environment than during live commissioning windows.

Commissioning, by contrast, is where the finished software meets the actual installation. Real wiring, instruments, drives, safeties, local controls, process behaviour, and operators all arrive here. Commissioning will still reveal field realities because no site is identical to the documents, but the PLC development should already be complete before that stage starts. If the project is still deciding its own architecture at commissioning, site time is carrying design work that should already have been settled.

The healthiest commissioning periods I have seen are the ones where most of the software questions are already reduced to field validation and sensible tuning. Honest site questions look more like this:

  • Does the transmitter scale match reality?
  • Is the valve feedback reliable?
  • Does the packaged skid really expose the status it promised?
  • Do local selectors behave as documented?
  • Are the trip delays appropriate in the real process?
  • Are the HMI texts clear enough for the actual operators?

Those are very different from questions such as “Where should this whole behaviour belong?” or “What does this pump object even expose?”

Sign off is the third job, and it deserves more respect than it usually gets. A signed project is not simply one that runs. It should have:

  • A coherent as-built record
  • Source code that matches the installed system
  • Backups taken and stored
  • Clear version identity
  • Known and bounded outstanding snags
  • Alarm changes captured
  • HMI edits reflected in the as-built record
  • A handover that does not rely entirely on verbal memory from the last day on site

One of the quiet failures in project delivery is when teams treat sign off like a commercial formality rather than the final act of engineering housekeeping. The project may achieve practical completion, but if the documentation, source files, backups, and known deviations are weak, the next engineer inherits a support problem rather than a finished system.

Keeping these three stages separate also improves decision quality. During CFAT, a client-driven behavioural change can still be evaluated as a design change. During commissioning, a field reality issue can be treated as a commissioning resolution. During sign off, a remaining issue can be listed honestly as an open item rather than being silently absorbed into the background.

In practical terms, I want each of these stages to produce something tangible.

  • CFAT: witnessed behaviour and a clear list of comments or snags
  • Commissioning: proven field behaviour, resolved site issues, updated settings, and documented changes
  • Sign off: a trustworthy as-built package and a clear handover state

If those outputs are vague, the stage probably was too. Do not let the end of the project become one undifferentiated rush. CFAT, commissioning, and sign off all deserve their own place.

Existing Plant Upgrades And Packaged Equipment

Everything I have described so far applies cleanly to new projects, but it is just as relevant on the awkward jobs people actually inherit. Existing plant upgrades, phased shutdowns, third-party packaged systems, undocumented legacy panels, and half-standardised codebases all make the workflow harder. They do not make it optional.

The worse the starting point, the more valuable the workflow usually becomes. If you inherit a site with incomplete documentation, the first stage is not to skip straight to software. The first stage becomes creating a working document picture from every source available. You are still acquiring documentation. It just happens to include detective work.

Existing plant upgrades also make the assets-controls-areas split even more important. Legacy systems often contain hidden area structures and hidden asset models, but they are encoded as folders, naming habits, copy-pasted blocks, or control panels rather than explicit software ownership. If you do not pull those patterns into the open, the upgrade risks carrying forward accidental architecture that nobody actually chose.

One of the most useful things you can do on an existing plant upgrade is distinguish between three kinds of existing behaviour.

  • Behaviour that must be preserved
  • Behaviour that is tolerated but should be improved
  • Behaviour that was a workaround and should not survive the rebuild

Without that distinction, teams either become too conservative and preserve too much mess, or too ambitious and rewrite more than the site can safely absorb. The workflow helps because it makes those decisions visible at the model stage rather than only during code conversion.

Packaged equipment brings its own version of the same challenge. A skid vendor, burner package, analyser system, or OEM machine may present itself as a mixture of black-box behaviour and low-level signals. I would rather treat packaged systems as assets with carefully defined boundaries, even if the internal behaviour is largely vendor-owned. That keeps the main project cleaner and makes commissioning conversations much sharper.

Another real-world complication is phased implementation. Sometimes you cannot stop the whole plant. One area is upgraded this shutdown, another next quarter, another after a production campaign. In that situation, the workflow becomes a staging tool. The system model can still be split into assets, controls, and areas, and the standards question becomes even more important because the project needs clean temporary boundaries between new and old behaviour. Phased work punishes vague ownership very quickly.

Legacy codebases also make people underestimate the value of sign off. If the old project had poor close-out, the new project often begins with avoidable uncertainty. That is a strong argument for finishing the new one well. Existing sites remember documentation quality for years. A clean close-out after a difficult upgrade is not just administrative tidiness. It is part of how the site regains confidence in change.

Existing plant work also teaches a softer lesson. Good workflow is not about forcing a perfect new architecture on a plant with twenty years of history. It is about choosing which parts of that history to respect, which parts to contain, and which parts to stop repeating.

The messier the job, the more you need a disciplined way to turn that mess into a model, protect the standards that matter, contain the bespoke complexity, and leave the site better documented than you found it.

Working This Way Across A Team

The workflow becomes even more valuable once more than one engineer is involved because structure is what allows a team to move in parallel without constantly breaking each other’s assumptions. On a solo project, a good structure mainly helps your future self. On a team project, it helps everybody immediately.

One of the first benefits is work allocation. If the project has clear areas, engineers can take ownership of those areas. If standard assets are already defined and protected, one engineer can refine the library while others instantiate and configure it. If the HMI is built against stable interfaces, SCADA and PLC work can progress with less friction.
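That library-versus-instantiation split can be illustrated with a minimal sketch (Python, with a hypothetical StandardPump asset type): one engineer owns the class definition, while others instantiate and configure it inside their areas:

```python
class StandardPump:
    """Illustrative standard asset: one interface definition, many
    instances. Names are assumptions for the sketch, not a real library."""

    def __init__(self, tag):
        self.tag = tag
        self.run_request = False   # written by area logic
        self.running = False       # read by area logic and the HMI
        self.fault = False

    def scan(self):
        # Equipment-level ownership: the asset, not the area, decides
        # whether the pump actually runs.
        self.running = self.run_request and not self.fault


# Area engineers instantiate the shared standard for their own areas.
intake_area = {tag: StandardPump(tag) for tag in ("P-101", "P-102")}
chemical_area = {tag: StandardPump(tag) for tag in ("P-201",)}
```

Because every instance shares one interface, a change to the standard lands in one place, and the area engineers only configure, never redefine, the asset behaviour.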

A team still needs coordination, but the coordination becomes more intelligent. Instead of endless generic sync conversations, the team can focus on real interface questions such as:

  • Has the standard pump interface changed?
  • What does the chemical area expose upward?
  • Which area owns the shutdown request to the transfer system?
  • Which tags are final for the HMI faceplate?

Those are much better coordination topics than “What are you changing in the PLC today?” Shared standards are central here as well. If two engineers define alarms differently, name statuses differently, or expose asset readiness inconsistently, the project loses a lot of the value of its structure.
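Those interface questions become easier to answer when the contract is written down explicitly rather than implied by whatever bits happen to exist. A hedged sketch, with illustrative field names, of what an area might expose upward:

```python
from dataclasses import dataclass


@dataclass
class AreaInterface:
    """An explicit record of what one area exposes to the level above.
    Field names are illustrative assumptions, not a fixed scheme."""
    ready: bool = False            # area is available to run
    running: bool = False          # area is actively producing
    fault_present: bool = False    # any asset in the area is faulted
    shutdown_request: bool = False  # owned by this area, consumed above


# Each area publishes exactly one interface instance.
chemical_area = AreaInterface(ready=True)
```

A reviewer or a second engineer can now ask "has this interface changed?" and get a precise answer, because the contract is a named structure rather than a scattering of internal bits.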

Code reviews also improve when the project is structured well. A reviewer can ask whether a new behaviour belongs in the area or the asset. They can check whether a bespoke rule is reaching through an interface it should respect. They can see whether the HMI variable is part of the intended contract or just a convenient internal bit. Reviews become architectural as well as syntactic, which is where many of the important issues actually sit.

Documentation within the team becomes easier too. An engineer writing commissioning sheets for one area already knows the asset list, the local sequences, and the local interfaces. The lead engineer can judge progress by area and by completion against the standards rather than by vague impressions of how many blocks have appeared.

On larger projects, another advantage appears. Teams that work with a clear workflow usually generate better feedback into their standards because they can tell whether a problem belongs in the library, the area structure, or a local project decision.

Even conflict becomes easier to resolve in a well-structured project. If two engineers disagree about where something belongs, they can frame the disagreement against the workflow.

  • Is this equipment-level behaviour or area-level behaviour?
  • Is this a standard issue or a bespoke issue?
  • Is this an interface change or a local implementation detail?

Those questions reduce personal friction because the debate is about the model, not about who happened to touch the code first.

All of that becomes especially valuable at handover. A team who shared a clean structure can explain the result more coherently to maintenance, operations, and the next engineering team because what gets handed over is not just a set of source files. It is a model of the plant expressed in software.

So while this article can absolutely help a single engineer structure a project better, it becomes even more powerful as team discipline.

Common Mistakes That Create Rework Later

By this point the workflow probably sounds reasonable, but it is still worth spelling out the mistakes that usually undermine it, because most of them do not look disastrous when they first happen. They often look efficient, which is why they survive long enough to create rework.

  • Starting with code before the scope has been modelled properly. Early code starts hardening assumptions about assets, areas, interfaces, and responsibility before those assumptions have really been examined.
  • Building bespoke logic before the standard assets are settled. Project logic ends up solving missing standard behaviour locally, and by the time the library is ready the bespoke layer has already baked in the wrong expectations.
  • Driving outputs directly from area logic. It feels quicker, but it blurs the boundary between process orchestration and equipment ownership and leaves no clean place for diagnostics, local modes, and fault handling to live later.
  • Letting inter-area dependencies remain informal. A loose permissive bit is easy to create and surprisingly expensive to live with because its meaning expands and nobody is quite sure what it really implies.
  • Developing the HMI against unstable internals. This creates the illusion of progress early and a steady stream of rework later when the PLC interface changes.
  • Treating simulation as optional and site as the real test bench. The avoidable problem is when site becomes the place where the project learns its own structure for the first time.

  • Finishing the software but not finishing the project. A system can run and still be poorly closed out. Common signs are:

  • Source code does not match the as-built system
  • Backups are unclear
  • Alarm changes are not recorded
  • HMI tweaks exist only on the live panel
  • Outstanding issues are known verbally but not documented

That kind of weak sign off becomes tomorrow’s uncertainty, especially on existing sites with a long maintenance history.
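The difference between a loose permissive bit and an explicit inter-area link can be sketched like this (illustrative names, not a real framework): the link carries a name, a declared owner, and a declared consumer, so its meaning cannot quietly expand:

```python
class InterAreaLink:
    """An explicit cross-area dependency with a stated meaning and a
    single owner, instead of an anonymous boolean anybody can write.
    All names here are assumptions for the sketch."""

    def __init__(self, name, owner, consumer):
        self.name = name          # e.g. "transfer_shutdown_request"
        self.owner = owner        # the only area allowed to assert it
        self.consumer = consumer  # the area that must react to it
        self.active = False

    def assert_from(self, area):
        # Refuse writes from anywhere except the declared owner.
        if area != self.owner:
            raise PermissionError(f"{area} does not own {self.name}")
        self.active = True


link = InterAreaLink("transfer_shutdown_request",
                     owner="chemical", consumer="transfer")
```

A PLC would enforce this through write access and structure rather than exceptions, but the principle is the same: the dependency is named, owned, and reviewable instead of informal.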

The deeper pattern behind all of these mistakes is the same. The project starts collapsing layers that ought to have been kept separate. Structure and behaviour get designed at the same time, standard and bespoke get mixed together, asset and area ownership get blurred, and office testing and site validation get collapsed into one stage.

These mistakes are not mysterious. Once you can spot them, they are often preventable, which is really the value of having a workflow in the first place.

The Full Workflow, Step By Step

At this point it is worth putting the whole workflow into one place with a bit more detail about what each step is really trying to produce. A strong workflow is not just a to-do list. Each step should leave the project in a better-defined state than before. If it does not, the step probably happened too quickly or too vaguely.

For each step, the main purpose, what should exist when it is done, and the typical risk if it is skipped or rushed:

1. Acquire documentation
   Purpose: Understand what the plant is supposed to be and where the information gaps are.
   Produces: A challenged document set, known assumptions, and a realistic view of open questions.
   Risk if skipped: The project starts from contradictions nobody has surfaced yet.

2. Split into assets, controls, and areas
   Purpose: Turn the documents into an engineering model that can guide structure.
   Produces: An asset list, a control list, and a clear area breakdown.
   Risk if skipped: The software mirrors the drawing pack rather than the behaviour of the plant.

3. Decide what is standard and bespoke
   Purpose: Protect reuse and identify where project-specific design really belongs.
   Produces: A clear view of standard asset types, project-specific logic, and interface ownership.
   Risk if skipped: Site-specific behaviour leaks into the reusable library, or weak standards get patched locally everywhere.

4. Load libraries into the project
   Purpose: Bring the intended standards and datatypes into the project before local placeholders appear.
   Produces: Approved library content and known interface versions available in the project.
   Risk if skipped: Temporary local code becomes the accidental standard.

5. Add hardware to the project
   Purpose: Establish the real physical and network structure in the engineering environment.
   Produces: Configured PLCs, remote I/O, drives, networks, and device structure.
   Risk if skipped: I/O and diagnostics are built on weak or moving hardware assumptions.

6. Add I/O to the project
   Purpose: Represent the real plant signals and channels in a controlled way.
   Produces: Known input and output points tied back to the documentation and hardware.
   Risk if skipped: Signals are handled inconsistently and become hard to trace later.

7. Add tags to handle I/O
   Purpose: Create a readable software surface above the hardware representation.
   Produces: Mapped, named, and where necessary interpreted signals suitable for logic and HMI.
   Risk if skipped: The program becomes full of raw addressing and unstable assumptions.

8. Add FBs to handle each area
   Purpose: Create ownership boundaries for local orchestration and local interface behaviour.
   Produces: Area blocks or modules that reflect the operational structure of the plant.
   Risk if skipped: Logic becomes flat, hard to navigate, and difficult to allocate across a team.

9. Add assets in each area
   Purpose: Instantiate the equipment layer that owns SCADA, output requests, and basic functionality.
   Produces: Standard or approved asset instances inside the correct area blocks.
   Risk if skipped: Area logic reaches down to raw outputs and loses equipment ownership clarity.

10. Add areas into Main OB
    Purpose: Give the top-level software a clear plant-oriented execution shape.
    Produces: Main cyclic logic that reads like the system rather than a list of devices.
    Risk if skipped: The top level becomes flat and unreadable, with no obvious plant map.

11. Develop bespoke controls in areas
    Purpose: Add the project-specific behaviour once the structure it depends on is stable.
    Produces: Local sequences, special rules, and area logic built around known assets and tags.
    Risk if skipped: Bespoke behaviour grows in the wrong layer and becomes hard to test.

12. Develop interlock and control links between areas
    Purpose: Handle cross-boundary dependencies explicitly rather than informally.
    Produces: Defined inter-area requests, statuses, and permissives with known meaning.
    Risk if skipped: Loose booleans and vague dependencies create hidden complexity.

13. Develop HMI against tags and variables
    Purpose: Build the operator layer on the intended software contract.
    Produces: Faceplates, area pages, alarms, and diagnostics tied to stable interfaces.
    Risk if skipped: The HMI becomes dependent on temporary internal variables and requires heavy rework.

14. Simulate and test
    Purpose: Prove behaviour by layer before site introduces real-world pressure.
    Produces: Test evidence, resolved software issues, and better confidence in the structure.
    Risk if skipped: Commissioning becomes the first honest test of the software architecture.

15. CFAT
    Purpose: Show the intended behaviour to the client or wider team in a controlled environment.
    Produces: Witnessed behaviour, captured comments, and agreed pre-site changes.
    Risk if skipped: Operator and client misunderstandings arrive late and expensively at site.

16. Commissioning
    Purpose: Validate the software against the real plant and resolve genuine field issues.
    Produces: Proven I/O, tuned settings, validated sequences, and working live behaviour.
    Risk if skipped: Site time gets consumed by basic software structure problems that should have been solved earlier.

17. Sign off
    Purpose: Close the engineering loop and leave a trustworthy system behind.
    Produces: As-built source, backups, updated documentation, resolved or listed snags, and clear handover.
    Risk if skipped: The project runs, but the next engineer inherits uncertainty instead of a finished deliverable.
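Steps 8 to 10 can also be illustrated structurally. A minimal sketch, with hypothetical tags and class names, of a top level that reads like the plant: areas own orchestration, assets own outputs, and the main cycle is one call per area:

```python
class Valve:
    """Illustrative standard asset; names are assumptions for the sketch."""

    def __init__(self, tag):
        self.tag = tag
        self.open_request = False
        self.opened = False

    def scan(self):
        # The asset owns its output.
        self.opened = self.open_request


class IntakeArea:
    """Area block owning its assets and local orchestration."""

    def __init__(self):
        self.inlet_valve = Valve("XV-101")
        self.enabled = False

    def scan(self):
        # Area logic requests; the asset decides and drives.
        self.inlet_valve.open_request = self.enabled
        self.inlet_valve.scan()


def main_cycle(areas):
    # Top level reads like the plant: one call per area, in plant order.
    for area in areas:
        area.scan()


plant = [IntakeArea()]
```

In a real PLC this shape would be function blocks called from the Main OB rather than Python classes, but the layering is the same: nothing at the top level touches a raw output.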

If you read down that table, one thing should stand out. The workflow is really a sequence of reducing uncertainty. Each step narrows what the project still does not know, which is why rushing steps tends to be expensive.

It is also worth noticing that the steps alternate between structural work and behavioural work. Documentation, modelling, standards, libraries, hardware, I/O, and area skeletons are structural. Bespoke controls, HMI, testing, and site work are increasingly behavioural. Structure first, behaviour second is the broad pattern because behaviour becomes easier to own once the structure it lives in is already there.

What Good PLC Project Management Looks Like

After all of that detail, the most useful way to close is probably to come back to the idea of project management itself. In controls work, good project management is often quieter than people expect. It is the discipline of choosing a good order for technical decisions, protecting the standards from being quietly rewritten by convenience, refusing to let the HMI get built on temporary bits, and asking early whether an interlock belongs inside an area or between areas.

It also requires restraint. Good teams do not try to settle every decision immediately, push every project-specific behaviour into the asset library, treat every incomplete document as usable, or mistake an early screen for real progress.

When people say a PLC project felt well run, they are often describing the effect of this kind of structure even if they do not use those words. The job felt readable, changes went to sensible places, and the HMI and PLC agreed with each other. Commissioning was still hard work, but it was not full of unnecessary architectural surprises.

That is the standard I would aim for. Not perfection, and not a fantasy where site never teaches you anything. I mean a project whose structure absorbs reality without collapsing under it, where documents become a model, the model becomes a software skeleton, and the finished system still reads like a set of engineering choices rather than a pile of survival edits.

If you are new to this way of working, the simplest place to start is not with a grand methodology document. Start with your next project and make sure four things are done properly before the detailed logic takes over.

  • Challenge the documents.
  • Split the scope into assets, controls, and areas.
  • Decide what is standard and what is bespoke.
  • Build the project skeleton before the bespoke detail.

Those four steps alone will improve a surprising amount. If you already work this way in parts but not consistently, the next step is usually standardisation. Clearer asset interfaces, earlier library protection, a better HMI contract, more explicit inter-area links, and test sheets that match the project structure will all pay back quickly.

If you are leading a team, one of the best things you can do is make this workflow visible. Show people the order, explain why it exists, review work against it, and encourage feedback into the standards. Teams get much more capable when the model they are working to is explicit rather than assumed.

The systems we build are physical systems with real consequences. Because of that, the workflow behind a PLC project deserves more respect than it often gets. It is part of the quality of the finished system, not just the route to finished code.

So if you want one takeaway from this article, it is this: structure the project early enough that behaviour has somewhere sensible to live. Do that, and a lot of later engineering gets easier.
