Since its first public appearance at the end of 2022, generative artificial intelligence has emerged as a technological force that has astonished the world. During 2023 and 2024, the technology firmly established itself as an innovative tool in the business world.
With the arrival of 2025, its role was expected to remain limited to boosting the efficiency of institutions and companies. The surprise, however, is that generative artificial intelligence has moved beyond the boundaries of work to become an integral part of people's lives, slipping into the details of their everyday routines.
According to a report published by Axios and reviewed by Sky News Arabia Economy, as the generative AI revolution enters its third year, the technology is moving beyond its role as a work tool and turning into a daily partner with an opinion on many aspects of personal life. It helps individuals analyse their diets, suggests possible diagnoses for illnesses, offers personalised advice, translates family conversations, even writes wedding vows, helps choose appropriate clothing, and supports people coping with grief over the loss of loved ones, including organising condolences and related arrangements.
The report revealed that recent research shows the most common uses of the generative AI chatbot Claude revolve around three main areas: mobile application development, content creation and communication, and academic research and writing.
Together these three areas account for about 26.8 percent of the chatbot's total uses, while the larger share, roughly 73 percent, is linked in countless ways to people's everyday lives.
Exploiting User Trust
As the trend toward using generative AI in personal life grows, privacy emerges as one of the biggest challenges and risks in this field. According to a report published by CNBC and reviewed by Sky News Arabia Economy, experts warn about the danger of individuals ignoring repeated warnings not to share their private information with AI tools, describing this behaviour as a red flag that signals potentially serious risks.
People have become increasingly comfortable supplying models such as ChatGPT, Gemini, Copilot, Apple Intelligence, and others with their sensitive data, even though these models operate under different privacy policies that may allow user data to be used and retained.
In many cases, users are unaware of this. They only later discover that AI models have been trained on their data, collected without their explicit consent, which may expose them to a wide range of negative consequences.
Oxford-certified AI consultant Hilda Maalouf tells Sky News Arabia Economy that in a world where the pace of life is accelerating in an unprecedented way, generative AI has become something like the perfect digital friend that people turn to for help with their daily decisions. It has become clear that this technology does not stop at offering effective solutions in the workplace but goes further, acting as a personal adviser that understands users' needs and keeps up with their preferences in ordinary life.
Why Do People Trust Artificial Intelligence?
According to Maalouf, the question that arises here is what makes people place such trust in these smart tools, to the point of relying on them in the most private aspects of their lives, such as personal advice, clothing choices, and even diet analysis. She notes that this growing phenomenon is due to several main reasons.
First, the ease and simplicity these tools provide make them a preferred option for many individuals.
Second, the speed with which data is analysed and decisions are suggested makes artificial intelligence an ideal partner in a fast-paced world. These tools can process huge amounts of complex information in the blink of an eye, saving users considerable time and effort.
Third, the accuracy of AI recommendations gives users a feeling of confidence. For example, analysing diets using large and constantly updated databases can make the results appear more reliable than decisions based only on personal intuition.
Fourth, personalisation. Generative AI tools rely on algorithms that can learn from users’ behaviour and preferences, which allows them to offer recommendations tailored specifically to their needs. Whether the issue is choosing clothes or organising a daily routine, people feel that these tools understand them more deeply than any human might.
Maalouf explains that all these factors together have transformed artificial intelligence from a mere technology that helps complete tasks into a daily companion whose opinion is consulted. She points out that excessive reliance on generative AI in the flow of everyday events has begun to raise growing concern about privacy.
She adds that the risks also extend to human decision-making skills, which is why experts warn people not to hand the threads of their daily lives over entirely to smart tools. The best approach is to strike a sensible balance that preserves human independence in decision making.
Maalouf stresses that wisdom lies in knowing the limits of generative AI. It is a powerful tool but it is not infallible. It is therefore better to use it as a supportive instrument without sacrificing our control over our own decisions or our privacy.
She says, “We must pay close attention to the information we share with it. It is better not to provide AI programs with details linked to financial information such as bank account numbers, credit card numbers, passwords, confidential information about our jobs, addresses and other personal data, because sharing such sensitive information may lead to it appearing on public servers.”
Ambiguity in How Data Is Handled
According to what Maalouf revealed in her comments to Sky News Arabia Economy, many AI companies follow an opaque approach when it comes to the data they collect.
These companies use the data that users enter into AI programs to train the models and to help them generate new content. She confirms that some companies make no secret of this and openly acknowledge it.
Many users, however, do not realise that the permissions they accepted in the terms of use grant companies broad rights to use their information. This approach also places a heavy burden on technology firms, which may later face legal action over these practices. Companies must therefore take strong measures to ensure customer data and privacy are protected safely and responsibly.
Maalouf also affirms in her interview with Sky News Arabia Economy that 2025 will be a year of technological creativity par excellence.
She says that generative applications, such as those that produce text or images, will become smarter and more innovative. Healthcare may see a major turning point in 2025, as artificial intelligence becomes the main tool in diagnosis and treatment, analysing medical data and images quickly and with accuracy that surpasses human capabilities.
She notes that AI will also bring a revolution in cybersecurity. Security systems will be able to anticipate new threats and respond to them within seconds, providing effective protection for billions of sensitive data points against leaks. This development will also extend to smart home devices and personal assistants, which will become more capable of understanding our needs and meeting our requirements in innovative ways we have not seen before.
Maalouf believes that the greatest challenge will be how we manage this technology responsibly so that we benefit from its advantages without losing our human balance in dealing with it.