Machine learning is a branch of artificial intelligence in which computer models learn from data to make predictions or decisions without explicit programming. It is not only of interest to science and to IT companies like Google or Microsoft: the world of online marketing can also change through developments in artificial intelligence.

What is machine learning?

Machines, computers, and programs traditionally follow predefined instructions: ‘If A happens, then do B’. But expectations for modern systems are rising, and developers can’t anticipate every possible scenario or pre-program every solution. That’s why today’s software needs to make independent decisions and react appropriately to unfamiliar situations. To achieve this, algorithms are used that allow programs to learn. First, they are provided with data; then, they analyse it to recognise patterns and form connections. This process is at the heart of machine learning.

In the context of self-learning systems, a number of related terms often appear; understanding them helps to build a clearer picture of machine learning.

Artificial Intelligence

Research into artificial intelligence (AI) focuses on creating machines that can act in ways similar to humans. Computers and robots are designed to analyse their environment and make the best possible decisions. By our standards, this would mean behaving intelligently, but that raises a challenge: we don’t even have a clear definition for measuring our own intelligence. At present, AI, at least as it currently exists, cannot replicate a complete human being, including emotional intelligence. Instead, it focuses on specific capabilities to solve targeted tasks, a concept known as weak artificial intelligence.

Since 2022, generative AI systems such as ChatGPT have gained significant importance. These are based on transformer models that can generate text, images, or code after training on massive amounts of data. However, they remain specialised systems that do not possess true general intelligence.

Neural networks

A branch of artificial intelligence research, neuroinformatics, seeks to design computers modelled on the brain. It views nervous systems in an abstract way, stripped of their biological properties and reduced purely to their functionality. Artificial neural networks are not primarily a physical construct but mathematical, abstract procedures. A network of neurons (mathematical functions or algorithms) is formed that can handle complex tasks much like a human brain. The connections between neurons vary in strength and can adapt to problems.

The advancement of neural networks has led to the rise of deep learning: complex neural networks with many layers, which dominate the field today.

Big data

The term ‘Big Data’ initially described nothing more than massive volumes of data. However, there is no defined point at which data stops being just data and becomes big data. The phenomenon has attracted increased media attention in recent years because of where this data comes from: in many cases, the deluge of information consists of user data (interests, movement profiles, vital signs) collected by companies like Google, Amazon, or Facebook to better tailor their offerings to customers.

Such large data volumes can no longer be satisfactorily analysed by traditional computer systems: conventional software can only find what users look for. Therefore, self-learning systems are needed to uncover previously unknown relationships.

Data mining

Data mining refers to the analysis of big data. Simply collecting data doesn’t hold much value on its own. The information becomes valuable when you extract and analyse relevant features, similar to gold prospecting. Data mining differs from machine learning in that the former focuses on finding new patterns, while the latter emphasises applying recognised patterns. Methods in data mining include cluster analyses, decision trees, regression methods, and association analyses. Today, data mining is often part of business intelligence systems or used in predictive analytics to predict customer behaviour or market trends.
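One of the methods named above, regression, can be illustrated in a few lines of code. The sketch below fits a least-squares line through some data points in plain Python; the ‘advertising spend vs. sales’ framing and the numbers are purely hypothetical.

```python
# Minimal sketch of simple linear regression (ordinary least squares),
# one of the data-mining methods mentioned in the text. Pure Python,
# with invented data for illustration only.

def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: advertising spend vs. resulting sales
spend = [1, 2, 3, 4, 5]
sales = [2, 4, 6, 8, 10]
slope, intercept = fit_line(spend, sales)
print(slope, intercept)  # perfectly linear toy data: slope 2.0, intercept 0.0
```

Once fitted, the line can be used exactly as predictive analytics uses regression: to extrapolate a value for an input that was never observed.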

Comparison of machine learning methods

In general, developers distinguish between supervised learning, unsupervised learning, and deep learning. The algorithms used in these methods vary significantly.

Supervised learning

In supervised learning, the system is supplied with labelled examples. Developers specify the value that each piece of information should have, such as whether it belongs in category A or B. The self-learning system then draws conclusions, recognises patterns, and can handle unknown data better. The goal is to continually minimise the error rate.

A well-known example of supervised learning is spam filters: based on certain characteristics, the system decides whether an email lands in the inbox or is moved to the spam folder. If the system makes a mistake, you can correct it manually, which influences the filter’s future calculations. This way, the software delivers increasingly better results.
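The spam-filter idea can be sketched in a few lines. The example below is a deliberately simplified, hypothetical classifier, not how real filters work: it merely counts which words appeared in labelled spam and non-spam training mails and scores new mails by those counts.

```python
# Toy sketch of the supervised-learning idea behind spam filters:
# count word occurrences in labelled spam/ham examples, then score
# new mails by those counts. Training mails are invented.

from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs with label 'spam' or 'ham'."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label a text by which class its words were seen in more often."""
    score = 0
    for word in text.lower().split():
        score += counts["spam"][word] - counts["ham"][word]
    return "spam" if score > 0 else "ham"

training = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("monday lunch with the team", "ham"),
]
model = train(training)
print(classify(model, "claim your free money"))  # -> spam
```

The manual correction mentioned above corresponds to re-training: adding a misclassified mail to the training list with its correct label shifts the word counts and thus future decisions.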

Unsupervised learning

In unsupervised learning, the program tries to identify patterns on its own. For example, it can use clustering: an element is selected from the data set, analysed for its features, and then compared to those already examined. If matching elements have already been examined, the current object is added to them. If not, it is stored separately.
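That procedure can be sketched directly in code. The distance measure (absolute difference on one-dimensional values) and the threshold below are illustrative assumptions, not a prescribed algorithm.

```python
# Sketch of the clustering procedure described above: each element is
# compared with the clusters built so far; if one is close enough
# (within a threshold), the element joins it, otherwise it starts a
# new cluster. Distance measure and threshold are illustrative choices.

def cluster(points, threshold):
    clusters = []  # each cluster is a list of 1-D values
    for p in points:
        for c in clusters:
            # compare with the cluster's first (representative) element
            if abs(p - c[0]) <= threshold:
                c.append(p)
                break
        else:
            clusters.append([p])  # no match: store separately
    return clusters

data = [1.0, 1.2, 5.0, 0.9, 5.3]
print(cluster(data, threshold=1.0))  # -> [[1.0, 1.2, 0.9], [5.0, 5.3]]
```

Note that no labels were supplied: the two groups emerge from the data alone, which is exactly what distinguishes this from the supervised case.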

Systems based on unsupervised learning are implemented in neural networks, among other things. Examples can be found in network security: a self-learning system identifies abnormal behaviour. Since, for instance, a cyberattack cannot be assigned to any known group, the program can detect the threat and raise an alarm.

In addition to these two main directions, there are also semi-supervised learning, reinforcement learning, active learning, and self-supervised learning. These methods sit between, or extend, the two main approaches and differ in the type and extent of supervision involved. Particularly relevant today is self-supervised learning, where systems generate learning tasks themselves, without user involvement.

Deep learning

Unlike classical machine learning algorithms such as decision trees or support vector machines, deep learning uses multi-layered neural networks to process more complex data sets. These are complex because they involve natural information, for example in speech, handwriting, or face recognition. Natural data is easy for humans to process but challenging for a machine, as it is difficult to quantify mathematically.

Deep learning and artificial neural networks are closely related. The way neural networks are trained can be described as deep learning. It is called ‘deep’ because the network of neurons is organised in multiple hierarchical layers. It starts at the first layer with a set of input neurons. These neurons receive the data, begin analysing it, and send their results to the next neural node. In the end, the increasingly refined information reaches the output layer, and the network produces a value. The often numerous layers between input and output are called hidden layers.
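The layer structure just described can be sketched as a minimal forward pass in plain Python: input values enter one layer of neurons, each neuron’s weighted sum passes through an activation function, and the results feed the next layer. The weights here are fixed toy values chosen for illustration; in practice, training adjusts them.

```python
# Minimal forward pass through the layer structure described above:
# input layer -> one hidden layer -> output layer. Weights are toy
# values; a real network would learn them during training.

import math

def sigmoid(x):
    """Common activation function, squashing any value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """Each row of weights holds one neuron's connection strengths."""
    return [sigmoid(sum(w * i for w, i in zip(row, inputs))) for row in weights]

hidden_weights = [[0.5, -0.6], [0.1, 0.8]]   # 2 hidden neurons, 2 inputs each
output_weights = [[1.0, -1.0]]               # 1 output neuron, 2 hidden inputs

def forward(inputs):
    hidden = layer(inputs, hidden_weights)   # hidden layer
    return layer(hidden, output_weights)[0]  # output layer produces one value

print(forward([1.0, 0.5]))
```

Adding more rows of weights between input and output is what makes the network ‘deep’: each extra hidden layer refines the representation passed on to the next.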

How does machine learning work for marketing?

Machine learning already plays an important role in marketing today. However, it is currently primarily large companies that use these technologies internally, with Google leading the way. Self-learning systems are still so new that they can rarely be purchased as off-the-shelf solutions. Instead, major internet providers develop their own systems and thus act as pioneers in this field. Since some, despite commercial interests, pursue an open-source approach and collaborate with independent research, developments in this area are accelerating.

Data analysis and forecasting

Marketing, alongside its creative side, also has a strong analytical dimension. Statistics on customer behaviour are a key factor in determining which advertising measures to take. In general, the larger the data set, the more valuable the insights that can be drawn from it. Self-learning computer programs can detect patterns and generate reliable predictions, something humans, who naturally approach data with bias, can do only to a limited extent.

Analysts typically approach measurement data with certain expectations. These biases are nearly unavoidable for humans and often cause distortions. The larger the data sets, the more significant the deviations might be. Although intelligent machines can also have biases, unintentionally trained into them by humans, they tend to be more objective with hard facts. As a result, machines usually provide more meaningful analyses.

Visualisation

Self-learning systems also improve and simplify the presentation of analysis results: automated data visualisation is the technique whereby the computer autonomously selects the appropriate representation of data and information. This is particularly important for helping people understand what the machine has discovered and forecast. Amid the vast data flood, it becomes challenging to present measurement results manually. Therefore, visualisation also needs to rely on the computer’s calculations.

Personalisation and generative design

Machine learning can also impact content creation; the keyword here is generative design. Instead of designing the same customer journey for everyone, dynamic systems based on machine learning can create individualised experiences. The content displayed on a website is still provided by writers and designers, but the system assembles the components specifically for the user. Self-learning systems are now also used to design or write content themselves: with Project Dreamcatcher, for example, it’s possible to have machines design components. LLMs like ChatGPT can also create website texts tailored to user groups.

Intelligent chatbots and language processing

Machine learning can also be used to optimise chatbots. Many companies already use programs that handle part of their customer support via a chatbot. However, in many cases, users quickly become annoyed with conventional bots. A chatbot based on a self-learning system with good natural language processing, on the other hand, can give customers the feeling of actually communicating with a person, and may even come close to passing the Turing Test.

Personalised recommendations

Amazon and Netflix demonstrate another important development in machine learning for marketers: recommendations. A major factor in these companies’ success is predicting what a user might want next. Based on collected data, self-learning systems can recommend additional products to the user. What used to be possible only on a large scale (‘Our customers liked Product A, so most will probably like Product B too.’) is now also possible on a smaller scale with modern programs (‘Customer X enjoyed Products A, B, and C, so she’ll likely also enjoy Product D.’).
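The small-scale version of this idea can be sketched with a simple co-occurrence count: recommend the item most often bought together with what the customer already owns. The purchase histories below are invented for illustration; real recommender systems use far richer signals.

```python
# Toy sketch of co-occurrence-based recommendation: count how often
# other items appear in baskets that overlap with the customer's own
# items, then suggest the most frequent one. Data is invented.

from collections import Counter

purchases = {
    "alice": {"A", "B", "C"},
    "bob":   {"A", "B", "D"},
    "carol": {"B", "C", "D"},
}

def recommend(owned):
    """Suggest the item most often co-purchased with the owned items."""
    scores = Counter()
    for basket in purchases.values():
        if owned & basket:             # this basket overlaps the customer's
            for item in basket - owned:
                scores[item] += 1      # count items the customer lacks
    return scores.most_common(1)[0][0] if scores else None

print(recommend({"A", "B", "C"}))  # -> 'D'
```

A customer who owns A, B, and C gets D recommended, because D appears in both overlapping baskets, which is precisely the ‘Customer X enjoyed A, B, and C’ pattern from the text.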

In summary, it can be stated that self-learning systems will influence online marketing in four important areas:

  • Volume: Programs that operate with machine learning and have been well trained can process massive amounts of data and thus make predictions for the future.
  • Speed: Analyses take time if they have to be done manually. Self-learning systems increase work speed, allowing for quicker reactions to changes.
  • Automation: With machine learning, it’s easier to automate processes. Since modern systems can independently adapt to new situations, even complex automation processes are possible.
  • Individuality: Computer programs can manage countless customers. Because self-learning systems capture and process data from individual users, they can also provide personalised advice.

Other areas of application for self-learning systems

But it’s not just marketing that increasingly relies on machine learning today. Self-learning systems are entering many other areas of our lives. In some cases, they assist in science and technology, driving progress forward. In others, they appear simply as gadgets, big and small, that make our daily lives easier. The areas of application presented here are merely examples; machine learning is expected to influence our entire lives in the not-too-distant future.

Science

What applies to marketing holds even greater significance in the natural sciences. The intelligent processing of big data is tremendously helpful for empirically working scientists. Particle physicists, for instance, use self-learning systems to capture, process, and identify deviations in far larger volumes of measurement data. But machine learning also aids medicine: some doctors are already using artificial intelligence for diagnosis and treatment. Furthermore, machine learning is used for prognosis, such as predicting diabetes or heart attacks.

Robotics

Robots are now ubiquitous, especially in factories. They assist with mass production by automating repetitive work steps. However, they often have little to do with intelligent systems, as they are only programmed for the specific task they perform. When self-learning systems are used in robotics, these machines should also be able to tackle new tasks. These developments are, of course, very interesting for other areas as well: from space exploration to household use, robots with artificial intelligence will be employed in numerous fields.

Traffic

A major showcase for machine learning is self-driving cars. Only thanks to machine learning can vehicles navigate independently and safely through real traffic, not just test tracks. It’s not feasible to program all possible situations in advance. Therefore, cars designed for self-navigation must rely on intelligent machines. Self-learning systems are also revolutionising traffic beyond individual transportation: intelligent algorithms, such as artificial neural networks, can analyse traffic and develop more effective traffic management systems, like intelligent traffic light controls.

Internet

Machine learning already plays a major role on the internet. One example is spam filters: by continuously learning, filters improve at detecting unwanted emails and more reliably keep spam out of the inbox. The same applies to the intelligent defence against viruses and malware, which better protects computers from harmful software. Ranking algorithms of search engines, most notably Google’s RankBrain, are also self-learning systems. Even when the algorithm doesn’t understand the user’s input (because no one has searched for it before), it can make an educated guess about what might match the query.

Personal assistants

Even in our own homes, continuously learning computer systems are becoming increasingly important, transforming simple homes into smart homes. For example, Moley Robotics is developing an intelligent kitchen equipped with robotic arms to prepare meals. Personal assistants like Google Home and Amazon Echo, which can control parts of the home, use machine learning technologies to understand users as well as possible. Meanwhile, many people carry their assistants with them at all times: with Siri, Cortana, or Google Assistant, users can send voice commands and questions to their smartphones.

Games

Since the beginning of research into artificial intelligence, the ability of machines to play games has been a major driver for researchers. In chess, draughts, and Go, the Chinese board game that is perhaps the most complex in the world, self-learning systems have competed against human opponents. Video game developers also use machine learning to make their games more engaging. Game designers can employ machine learning to create balanced gameplay and ensure that computer opponents intelligently adapt to the behaviour of human players.

The history of self-learning systems

Robots and automatons have fascinated humanity for centuries. The relationship between humans and thinking machines has always oscillated between fear and fascination. However, the actual efforts toward machine learning did not begin until the 1950s, a time when computers were still in their infancy and artificial intelligence was little more than a dream. Although theorists such as Thomas Bayes, Adrien-Marie Legendre, and Pierre-Simon Laplace had laid important foundations in the two centuries before, it was only with the work of Alan Turing that the idea of machines capable of learning became concrete.

Quote

“In such a case one would have to admit that the progress of the machine had not been foreseen when its original instructions were put in. It would be like a pupil who had learnt much from his master, but had added much more by his own work. When this happens I feel that one is obliged to regard the machine as showing intelligence.”
Alan Turing in a lecture in 1947. (Quoted in B. E. Carpenter and R. W. Doran (eds.), A. M. Turing’s ACE Report of 1946 and Other Papers)

In 1950, Turing developed his now-famous Turing Test: a kind of game in which a computer tries to convince a human that it, too, is human. If the human can no longer tell that they are not speaking with a flesh-and-blood person, the machine has passed the test. That milestone was still far off at the time, but just two years later Arthur Samuel created a program that could play draughts and improved with every game. The program had the ability to learn. In 1957, Frank Rosenblatt developed the Perceptron, the first algorithm capable of supervised learning, and an early artificial neural network.

Today, major companies are among the driving forces in machine learning development: IBM has built Watson, a computer with an immense knowledge base that can answer questions posed in natural language. Google and Meta use machine learning to better understand their users and offer them more features.
