
We live in an age of rapid – indeed breathtaking – technical development. The breakthrough technology of five years ago may already be obsolete today. It is a truism that laws and regulation lag behind technical developments.
The most obvious example of this is the autonomous car. Long promised, driverless cars are now on the road, and driverless taxis are operating in half a dozen cities across the US and in China. If a driverless car causes an accident, there is no driver to take the blame. So, who do you sue? The passenger? The owner? The car manufacturer? The software maker? And – to reduce it to a perhaps trivial level, but one that has already caused real problems for autonomous taxi companies – if a driverless electric car parks illegally while queuing up to charge, to whom does the warden give the ticket? Waymo’s driverless taxis collected 589 parking tickets in San Francisco last year.
Issues of legality and responsibility in automated industrial lifting might seem less extreme, but they are just as real and will become even more so. Smart systems, the Internet of Things (IoT), AI and machine learning are penetrating factories and manufacturing plants, controlling and automating the processes within them – and therefore controlling, among other things, the lifting equipment that forms part of those processes. Even small-capacity electric hoists are candidates for automation. As Demag point out in this context, “Automation of a lifting process involves many components…An electric chain hoist is an excellent choice for the lifting axis up to 5t. For heavier loads, electric wire rope hoists can be used.” But who is responsible for the operation of such automated hoists – and for the operator’s safety?
At one level, the answer is simple – and it does not matter that legislation has yet to include any reference to AI or machine learning. LOLER, the Lifting Operations and Lifting Equipment Regulations 1998, sets the regulations in the UK and these have the force of law. That AI had not been dreamed of back in 1998 is irrelevant.
According to LOLER, then, if a worker is injured by, say, a load swinging from a manual chain hoist, or crushed by a load dropped from it, the owner of the factory is responsible. They should have ensured that the equipment was properly installed, maintained and in safe condition, and that its operator had been properly trained to operate it.

If the hoist that caused the injury was an old-fashioned electric chain hoist, controlled in the traditional and transparent way by, say, an operator with a pendant push-button, with no digitalisation or smart features, then exactly the same thing is true – the owner is responsible for any injury it causes.
And if it is a fully autonomous, smart and AI-controlled hoist that causes the accident? A hoist that has learned by machine learning the most efficient way of operating, the most efficient order in which to carry out tasks, the most efficient speeds, accelerations, lifting heights and so on? Legally, and probably morally, the answer is the same: the owner of the hoist is responsible. They should have ensured that the hoist and its operations were safe.
It is here that a practical problem – though not perhaps a legal one – arises. AI is not transparent. Nobody knows quite how it works. It teaches itself. It takes hundreds, thousands, perhaps millions of examples, and works out the optimal way to perform its tasks – a working-out so mathematically complex that human beings cannot follow its logic. Yet human beings are still morally, and legally, responsible for its actions. The provisions of LOLER still apply: the owner of the machine must take all reasonable precautions and, so far as is reasonably practicable, ensure that the machine is not dangerous.
This matters because the AI-controlled factory is already with us, and hoist manufacturers make proud boasts of the interconnectivity of their machines. Software algorithms and AI already perform data monitoring, predictive maintenance, energy efficiency calculations and the like. Process cranes in particular operate autonomously, with no human operator, in thousands of power plants, waste plants and steel mills throughout the world. So, are the regulations and the law so desperately old-fashioned and out of touch that they need revision?
Playing catch up
Ben Dobbs, head of global standards and legislation at LEEA (Lifting Equipment Engineers Association), is well placed to comment on some of these issues. “I guess the overall answer is that users are not yet fully aware of the implications of such technology and the pros and cons of it,” he says. He offers the following opinions, explaining that it is a developing area and definitive answers are probably not yet available.
But, with more technological advances in overhead lifting, are there proper safety regulations in place, or is a review needed? Have the regulations and legislation concerning the use (and safe use) of overhead lifting equipment caught up with the advances in technology?
“They are being developed,” Dobbs says. “Machinery regulation now incorporates requirements for such things, and it will be fully enforced across Europe in 2027. User legislation, on the other hand, is much further behind and to my knowledge in most places dates back to the late 90s.
“However, it should be pointed out that the user legislation is risk based and goal setting – it states what must be achieved but says nothing about the means of achieving it. Writing it this way means that, by default, it covers all aspects, as it is up to the employer of persons using the equipment to ensure that it is safe at all times and that all personnel involved in its operation are competent for the task.”
In other words, it remains the case that it is the responsibility of the owner or employer to ensure that the AI-driven machine is safe and properly maintained and that the operators are properly trained in its use – and that is despite that little practical difficulty mentioned above.
“That an overhead hoist or crane in a factory is trained on AI and machine learning and has, in effect, taught itself, and that no human can check this for safety is a good point,” he says. “But I guess the question you should be asking is how much freedom we should give AI – and that is a debate that to my knowledge has not been discussed at user level.

“You see, if you consider fully automated systems, which are pre-programmed but can go wrong, we tend to isolate the equipment with cells, so that when say a human enters the working area and might be in danger, the system automatically stops. This is how the risk of collision with people is reduced. The automated machine does not see or care that something is in its path, so cutting power when someone does appear in the wrong place is the way to mitigate the risk.
“So in the case of AI, you would need to take control of such things away from the AI machine and have a fail-safe mechanism that ensures that no matter what the machine wants to do, there is no risk to safety.” In other words, a kill switch would make sure that the AI cannot run totally amok like HAL, the super computer in the classic film 2001: A Space Odyssey.
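The fail-safe principle Dobbs describes can be sketched in a few lines. This is an illustrative sketch only, not real safety-controller code; the names `cell_clear`, `estop_ok` and `drive_enabled` are hypothetical, and a real system would implement this in certified safety hardware rather than software:

```python
# Sketch of the fail-safe interlock principle: the safety chain is ANDed
# with the AI's motion request, so the AI can ask for motion but can
# never override the cell sensors or the emergency stop circuit.

def drive_enabled(ai_requests_motion: bool, cell_clear: bool, estop_ok: bool) -> bool:
    """Power reaches the drives only if the cell is clear of people AND
    the e-stop circuit is healthy AND the controller requests motion."""
    return cell_clear and estop_ok and ai_requests_motion

# A person enters the cell: the presence sensor drops and power is cut,
# regardless of what the AI controller wants to do.
assert drive_enabled(ai_requests_motion=True, cell_clear=False, estop_ok=True) is False
assert drive_enabled(ai_requests_motion=True, cell_clear=True, estop_ok=True) is True
```

The design point is that the kill switch sits outside the AI's authority: it gates the power supply rather than asking the controller to stop.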
Dealing with liability
Whether or not such a kill switch was in place, is it the case that if an AI-controlled machine causes an accident or injury, the owner and employer is responsible – even though there is no way that the owner could have worked out in advance that the machine could cause the accident?
“Yes, they would be,” says Dobbs. “The reason would be that they have relinquished control and introduced risk. The risk assessment should identify such an eventuality and measures should be put in place to mitigate it. To give an analogy, if you have an AI hoist without maintaining [a kill switch] control, it would be the equivalent of asking an operator to drive a mobile crane without a proper braking mechanism.”
It would seem, then, that any person injured by an AI crane would be entitled to redress from the owner of the machine. Would the owner in turn be able to sue the manufacturer of the machine, or the writer of the software that instructed the AI what tasks it should learn to perform? Have such situations, or comparable ones in other industries, been tested in law? Or is this a case, as with self-driving vehicles, where the technology has progressed beyond the current regulations?
“Manufacturing and user legislation are two different things,” says Dobbs. “The supply legislation [that affects the manufacturer] is integrating requirements, but I am not an expert on such things so cannot comment as to the extent or suitability.” He can, however, speak on user legislation. “The user requirements are to identify and address risks associated with the conditions of use. We are all responsible for safety in the workplace. However, theoretically, if the supply legislations had AI-essential health and safety requirements that were not met by the manufacturer, and if these resulted in user injury, then yes, the manufacturer would be culpable according to their declaration of conformity to the legislation. However, this would be true of any essential health and safety requirement.” It comes down to what is possibly one of the eternal verities, or at least a fundamental principle of the law: someone is always culpable. “It may be the manufacturers for not properly addressing the essential health and safety requirements. It may be the users for not properly planning the operation.”
Are amended or updated regulations necessary to deal with all this, or is it covered by current regulations and principles? Again, it is probably too early to say. “It is possible, but until we can be sure that there is an issue there is little point writing requirements for one,” says Dobbs. “Most new requirements or changes to existing essential health and safety requirements are usually done following evidence of a problem, to prevent it occurring again. I have been involved in lifting equipment product design and use for 25 years and could literally go on forever with the ‘what ifs’ but the only thing that can really be done is to reduce any risk to the acceptable minimum.

“I don’t believe that anything can be truly eliminated and as a last resort we all need to look out for ourselves and the people we work with.”
Keeping it practical
Now that the legalities have been made clear, insofar as they can be, let us turn to practicalities. What exactly can autonomous chain hoists do, and what do you need to make them do it?
Data is essential to autonomous operation, and data is provided through sensors, so any autonomous hoist will be laden with them. It is generally true that the more sensors you have on your equipment, the more quickly the AI learns and teaches itself. AI models are trained with large amounts of data, and the more sensors you have the more data they provide. It is an ongoing process – sensors continue to collect and provide the raw data throughout their working life, and that data continues to train the AI model. In that sense AI and human operators are similar: the more experienced ones are better.
It is of course important that the sensors be of the right type and of adequate sensitivity. Laser sensing is one common technology for automating distance control. But lasers in an outdoor environment can be affected by fog, mist or bad weather. Induction-based sensors are an alternative. Metal – such as an end-stop – approaching a coil induces a current in it, and the closer the metal gets to the coil, the greater the current.
One application for distance sensors is in container terminals. Many of these are among the most fully automated lifting and material handling operations in the world. Autonomous operation is becoming the standard rather than the exception. Typically, container handling cranes operate above the storage area, positioning containers before lowering them into place. The hoist carriage on each crane travels along the length of the crane structure. Sensors fitted to the hoist carriage detect the distance to the ends of the structure. When that distance becomes small the sensors cut power to the drive motors and prevent the carriage from travelling beyond allowable limits.
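The end-of-travel logic described above can be sketched simply. This is an illustrative example rather than real crane-control code, and the zone distances and creep speed are invented for the purpose:

```python
# Sketch of an end-of-travel cut-off for a hoist carriage: the speed
# actually sent to the drive motors is clamped according to how close
# the carriage is to the end of the crane structure.

SLOW_ZONE_MM = 500.0   # begin decelerating inside this distance (assumed value)
STOP_ZONE_MM = 50.0    # cut drive power inside this distance (assumed value)
CREEP_SPEED = 0.2      # maximum speed fraction allowed in the slow zone

def travel_command(distance_to_end_mm: float, requested_speed: float) -> float:
    """Return the speed (0.0-1.0) actually applied to the drive motors."""
    if distance_to_end_mm <= STOP_ZONE_MM:
        return 0.0                            # hard stop: carriage may not pass the limit
    if distance_to_end_mm <= SLOW_ZONE_MM:
        return min(requested_speed, CREEP_SPEED)  # creep towards the end-stop
    return requested_speed                    # free travel in the open section

assert travel_command(2000.0, 1.0) == 1.0   # well clear: full requested speed
assert travel_command(300.0, 1.0) == 0.2    # in the slow zone: creep only
assert travel_command(20.0, 1.0) == 0.0     # at the limit: power cut
```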
Contrinex is a Swiss-based global technology leader in smart sensors for complex automation. It produces sensors of all kinds, including those for harsh operating conditions.
One automated container port that consulted them was experiencing problems. Mechanical play in the drive system of the crane carriages was resulting in the carriages drifting as they travelled along the crane structure. The sensor devices originally fitted had a sensing range of 15mm, but this was proving inadequate. Occasional collisions occurred with the crane structure, causing damage and interrupting operation. Replacement sensors with an increased sensing range were needed.
Inductive sensors from the Contrinex Basic range were substituted. Many sensing devices now have standardised, plug-and-play interfaces, making such substitutions simple. Industry-standard housings (40x40mm glass-fibre-reinforced polyamide housings in this case) make them drop-in replacements for the original devices, so little downtime was incurred during the changeover to the new sensing arrangements. The inductive sensors have a 20mm sensing distance that eliminated the risk of collisions. They are available as IP68- or IP69K-rated units and are ideal for the outdoor working environment of a seaport.
Communication with the crane’s control system is via a PNP changeover interface, replacing the existing two-wire arrangement; a flexible PUR-sheathed cable provides the electrical connection.
A step improvement in operational performance was evident immediately after the new units were installed. The new sensors provide reliable sensing of each hoist’s position with no reported collisions since the date of installation. The customer has reported a marked reduction in crane downtime with an associated decrease in expenditure.
Kill switch engaged
Redundancy is just as necessary in safety-related applications such as AI-controlled hoists as it is in nuclear technologies.
A single sensor, or a single technology, is frequently not enough. Even in non-safety-related cases, maximum operational reliability is essential. One solution, from German sensor manufacturer Sick, is based on a safety controller that runs cyclical tests to check sensor detection behaviour, sensor response to entry commands, and that output signals are switching correctly.
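A cyclical self-test of this kind can be sketched as follows. This is a hedged illustration of the general principle, not Sick's implementation; the `Sensor` class and pass/fail criteria are hypothetical:

```python
# Sketch of a cyclical sensor self-test: each cycle, every sensor must
# both detect an injected test stimulus and demonstrably switch its
# output. Any failure latches the system into a safe state.
from dataclasses import dataclass

@dataclass
class Sensor:
    detects_test_target: bool   # did the sensor see the injected test condition?
    output_switched: bool       # did its output line actually change state?

def cyclic_test_passes(sensors: list) -> bool:
    """One test cycle over all sensors; True only if every one is healthy."""
    return all(s.detects_test_target and s.output_switched for s in sensors)

healthy = [Sensor(True, True), Sensor(True, True)]
faulty = [Sensor(True, True), Sensor(True, False)]   # one output stuck

assert cyclic_test_passes(healthy) is True
assert cyclic_test_passes(faulty) is False   # stuck output detected, system goes safe
```

The value of the cyclical test is that it catches a failed or stuck sensor before the sensor is actually needed, rather than discovering the fault at the moment of danger.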
Detection of obstacles – sometimes unexpected ones – along the path of industrial cranes is another essential safety feature. Here, too, Sick have suitable sensors. Their AOS104 RTG and AOS504 RMG object detection systems monitor for objects such as twist locks – for container cranes and terminals – or service vehicles, and are said to do so reliably and with no blind zones.

The AOS in the name stands for Advanced Object detection System, and it is a lidar device. Lidar works on the same principle as radar – both are time-of-flight methods for measuring distance. But rather than sending out radio signals and timing the reflected echo as radar does, lidar uses laser beams that reflect off the distant object. It measures the round-trip time and, multiplying it by the speed of light and halving the result, works out the distance.
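The time-of-flight arithmetic is simple enough to show directly. The halving is the easy part to forget: the measured time covers the journey out and back, so only half of it corresponds to the distance to the target:

```python
# Time-of-flight distance calculation, as used by lidar (and radar).

C = 299_792_458.0  # speed of light in metres per second

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance to the target: the measured time is a round trip,
    so the product of time and light speed is halved."""
    return C * round_trip_time_s / 2.0

# A 400-nanosecond round trip corresponds to roughly 60m to the target.
d = tof_distance_m(400e-9)
assert 59.0 < d < 61.0
```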
Telemecanique is a French company that offers many types of sensor, including limit switches, pressure sensors, photoelectric sensors and proximity sensors. On an overhead gantry crane, all of these might be used. Controlling the stopping and slowing down of the horizontal movement can be done by a single two-way, two-speed limit switch. Photoelectric sensors can provide long-distance detection for anti-collision functions, and for detecting another crane or another girder on the same runway. But not all sensors have to be digital. Limit switches are an example: a simple rocker arm, moved by physical contact with an end-stop, can be just as effective, more robust and much more intuitively understandable.
Synchronisation of multiple trolleys, girders and hoists on a single runway is another prime example of where automation can improve safety. It need no longer depend on the skills and close attention of sharp-eyed operators. Tandem function sensors ensure that two overhead cranes constantly detect each other’s relative position and adjust their movements accordingly. Again, time-of-flight-based sensors are a solution. Some such sensors have tandem and anti-collision functions built in; others require configuration, through a PLC (programmable logic controller), for example. In the first category, another specialist in this sector, Sensor Partners, headquartered in the Netherlands, offers its LAM 5.21. It has a switching frequency of 100Hz, a sensing range of 70m and an accuracy of plus or minus 30mm. The company has a wider choice in the second category, with sensing ranges of up to 270m and accuracies down to plus or minus 3mm.
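The anti-collision side of tandem operation reduces to a separation check. The sketch below is illustrative only; the minimum-separation figure and function names are invented, and a real installation would configure such limits in the sensor or PLC:

```python
# Sketch of an anti-collision check for two cranes sharing a runway:
# a crane may close the gap only while the measured separation stays
# above a configured minimum.

MIN_SEPARATION_M = 5.0  # assumed minimum gap between the two cranes

def safe_to_advance(pos_a_m: float, pos_b_m: float, closing: bool) -> bool:
    """True if crane A may continue its current move. A move that is not
    closing the gap is always permitted; a closing move is permitted only
    while the separation exceeds the minimum."""
    separation = abs(pos_b_m - pos_a_m)
    return (not closing) or separation > MIN_SEPARATION_M

assert safe_to_advance(10.0, 40.0, closing=True) is True    # 30m apart: proceed
assert safe_to_advance(10.0, 13.0, closing=True) is False   # 3m apart: hold
assert safe_to_advance(10.0, 13.0, closing=False) is True   # moving apart: always fine
```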
Doubtless automation will increase, even among smaller capacity hoists. It is also clear that the legalities surrounding it will gradually become clearer. It is a brave new world that we are looking at – at least, let us hope so. Some fear instead that AI will lead to a dystopian machine-controlled nightmare in which humans are irrelevant at best and eliminated at worst. To prevent it, remember to install that kill switch.