Deep Learning at SICK Linköping



Nowadays you often hear phrases such as machine learning, artificial intelligence (AI) and deep learning when reading about new technologies and startups. Many say that their solution is based on AI or deep learning, but what does that mean? These concepts are not new; they have been around for a long time, but the term deep learning has become quite popular in recent years. AI is the most general term and can be described as any technique that tries to mimic human behavior. A subset of those techniques is called machine learning, which gives computers the ability to learn without being explicitly programmed. This means that computers can learn and gain experience from data and make predictions on examples they have not seen before. This is a big difference from traditional methods, where the programs need to be designed and programmed for each specific task. Machine learning can in turn be divided into several subfields, and one of these is deep learning, which usually involves deep (multi-layer) artificial neural networks. This concept has also been around for quite some time, but it has gained popularity in recent years after outperforming other methods in areas such as computer vision, speech recognition and natural language processing. The major reasons for its success are that large annotated training sets have become available together with faster and cheaper hardware (needed to train the networks), along with new architectures and optimization/training techniques.

“Since we have so many products and a lot of different applications the work here is fun and diverse.”

At SICK Linköping we work with products involving computer vision, which is a big area within deep learning. A typical example when introducing deep learning is deciding whether an image contains a cat or a dog. Small children have no problem distinguishing between the two animals, but a computer only “sees” a matrix of numbers in the image. If you try to describe in words what the differences between a dog and a cat are, it can be difficult, since both are usually furry, have four legs, a tail etc. When it’s difficult to describe the difference, it’s usually also difficult to program a computer to find it. On the other hand, computers have no problem comparing two images and deciding whether they are exactly the same, which can be hard for humans if the difference is small. The animal you want to detect may also appear with different deformations, backgrounds, lighting conditions, viewpoints, occlusions etc., which makes it difficult to program a computer to handle every case. You would also need a specific algorithm for each new object (animal) you want to detect. With deep learning this is solved by training a model on a large amount of labeled data (images containing cats and dogs). During training, the model makes a prediction for each input, and by comparing this to the correct label the model is updated in order to make an even better prediction. The neural network has different layers, where early layers usually learn to find low-level features such as edges, while later layers learn more abstract features that depend more on the classes. This hierarchical structure makes it easier to add new classes by keeping the low-level features/layers intact and only re-training the last layers (this is called transfer learning).
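The train-and-freeze idea above can be sketched in a few lines. This is a minimal NumPy toy, not how any SICK product is implemented: the 2-D point clouds stand in for images, and the random, fixed first layer stands in for pre-trained low-level features. Only the last layer is updated, exactly as in transfer learning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: two 2-D point clouds as stand-ins for "cat" and "dog" images.
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# "Pre-trained" first layer: frozen, never updated (the low-level features).
W1 = rng.normal(0.0, 1.0, (2, 8))
W1_init = W1.copy()  # kept only to verify the frozen layer stays fixed

# Last layer: the only part we re-train for the new classes.
W2 = rng.normal(0.0, 0.1, (8, 2))

def forward(X):
    h = np.maximum(X @ W1, 0)  # ReLU features from the frozen layer
    logits = h @ W2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)  # softmax class probabilities

# Training loop: predict, compare with the correct label, update W2 only.
for _ in range(200):
    h, p = forward(X)
    grad = p.copy()
    grad[np.arange(len(y)), y] -= 1.0       # d(cross-entropy)/d(logits)
    W2 -= 0.1 * (h.T @ grad) / len(y)        # gradient step on the last layer
```

After training, the model classifies the toy data well even though the first layer was never touched; swapping in new classes would only require repeating the short loop above with new labels.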

“It’s good that we have research collaborations with universities, and of course it’s motivating to be a part of the automation revolution.”

At SICK we don’t have to tell cats from dogs; instead we usually work within factory, logistics and process automation. Here you may want to detect whether a package contains dangerous goods, find a logotype, detect and read text, or decide whether a manufactured sample is OK or not. One drawback of deep learning is that a large amount of data is needed during training, but since SICK is a sensor company our products generate a lot of data. Compared to other companies, SICK has the advantage of application knowledge from our customers and all kinds of sensors that can be combined and connected to help our customers during the Industry 4.0 revolution.


Example of detection and segmentation of codes in an image. Green corresponds to 1D codes, blue to 2D codes and red to background.

Before I started working at SICK I thought they only produced some specific hardware, but the major part of the work is making the sensors intelligent and easy to use by developing algorithms and other software. Since we have so many products and a lot of different applications (new ones appear all the time), the work here is fun and diverse. For me it’s also important to stay up to date with the research community by reading papers and attending academic conferences, both within deep learning and in other areas. It’s also good that we have research collaborations with universities, and of course it’s motivating to be part of the automation revolution.