Computer, tell me... - how to become friends with AI

For several months, as a form of after-hours relaxation, I've been enjoying discovering another huge series from the Star Trek universe. Although my favorite captain so far has been Jean-Luc Picard, I am increasingly liking the way the character of Benjamin Sisko presents the role of a leader. Yes, you guessed it right - I am watching Deep Space Nine.
However, in addition to breathlessly following how the situation develops on the outskirts of the Alpha Quadrant (a major conflict between the Bajorans and the Cardassians with the Klingons and Romulans in the background) and in the Gamma Quadrant (where the Dominion is stirring things up), I am, with growing curiosity, analyzing the approach to technology aboard the Deep Space Nine station.
And while some concepts still seem distant or even improbable (teleportation, interplanetary travel at warp speed, or time travel), other motifs have become strangely... familiar.
Miles O'Brien in action, or a conversation with the onboard computer
Fans of the series know the Chief of Operations on the Deep Space Nine station - Miles O'Brien. His work, in particular, became the pretext for these considerations, although in the Star Trek universe, almost everyone utilizes the advanced technologies of that time in some way - by keeping a personal log, asking for information, or performing their duties. However, no one debugs (meaning: tries to fix) as many things on the station as our engineer.

Spoiler alert!
In one episode, Miles O'Brien is tasked with checking why the onboard computer is refusing to cooperate. It turns out the culprit is a program originating from the Gamma Quadrant that was accidentally uploaded to the Deep Space Nine computers. After a long struggle, Chief O'Brien realizes that the program is behaving like a puppy seeking attention, so he isolates it by creating a virtual kennel (to put it more technically - he creates a sandbox for it). Fans of the series can refresh their knowledge here.
End of spoiler
The way O'Brien tried to figure out what problem was ailing the station's computer reminded me of my struggles with a language model. Only instead of saying "Computer, check... Computer, tell me... Computer, do..." I was writing subsequent prompts for the AI. It didn't matter if I was debugging code, asking for a translation of some content into another language, trying to expand my knowledge, or doing in-depth research. The pattern was the same. I ask for something, I get an answer. Sometimes it's okay, but more often I try to extract more information. Sometimes I start to get annoyed because the AI doesn't understand my assumptions, and clarifying them doesn't help - just like O'Brien got annoyed when Cardassian technology again made it difficult for him to work with the station's equipment.
The conversations that the Deep Space Nine crew members had with the onboard computer once seemed to me solely the fruit of intellectual effort and the imagination of the screenwriters. However, when I started working more with AI, I suddenly noticed that the Star Trek universe had become slightly closer to me, precisely because of similar joys and struggles resulting from communication with the onboard computer.
Technology of tomorrow, yesterday, or... today?
One might get the impression that AI appeared suddenly, a few years ago, and the suddenness of this appearance has become the cause of another revolution. However, objectively speaking, we are talking about several decades during which enthusiasm mixed with skepticism, and subsequent small and larger breakthroughs helped add new pieces to the AI puzzle.
AI itself has a long history, and many consider the Turing test in 1950 as its beginning. This is a conventional boundary, and the development of AI itself has had its ups and downs, which I talked about during my presentation at DevJS this autumn in a talk about programming in the age of AI. However, in the consciousness of many people, AI appeared the moment an "ordinary" user could start testing its capabilities in ChatGPT, at the turn of 2022 and 2023. That is when AI found its way from the scientists' and innovators' rooms to our rooms, flats, and houses. And although the GPT-1 model was presented back in 2018, it took several more years of work before the technology gained the capabilities that began to amaze the world - and became the beginning of a kind of revolution.
I deliberately won't delve into speculations about whether this is a revolution or not. I feel that at this moment, we lack the perspective on the reality we are experiencing. And whether something is a revolution or not is definitely easier to assess in hindsight. Two years from the moment when the average person could become interested in ChatGPT is also a relatively short period to talk about great changes for us and for future generations. We do not have sufficient knowledge about the consequences of these changes, heck! These changes are still happening, and it is difficult to even talk about any stabilization at this moment.
However, we can take it as a given that the technology of 'tomorrow' is already within reach for many of us. At the same time, it isn't something that just dropped out of the clear blue sky - specialists have been working for years on the solutions that eventually helped build the tool we call Artificial Intelligence.
Overkill: The Race to Implement AI
There is no doubt that AI offers significant possibilities in content generation, analysis, process automation, and knowledge synthesis. It’s also hard to deny that the concept of AI made a truly spectacular entrance into the mainstream. This comes with its share of pros, but also many cons.
Many companies have started using AI over the last two years, sometimes over-eagerly trying to shoehorn the tool into every possible corner, without always actually improving the product or the user experience. Some point to the widespread use of language models as the cause of the IT industry crisis, the hiring freeze for juniors, and mass layoffs. While AI might have been one of the reasons, it likely wasn't the only one - yet AI was the one often labeled as the culprit behind the whole mess. It’s as if the impact of the market correction following the pandemic-era hiring boom, political shifts in the US, and other global political and economic changes - including the war across our eastern border - were completely overlooked.
Nevertheless, the use of language models in daily work is a topic worth paying attention to. After all, many of us can benefit from the "superpowers" latent in AI. At the same time, it’s not worth getting dragged into debates about whether AI is about to become Skynet from Terminator, take our jobs, or provide us with unlimited opportunities where only our laziness and reluctance to use AI stand as barriers. Emotions - whether excessively positive or negative - don't help us take a sensible look at the subject or reflect on where and how we can personally leverage the capabilities offered by LLMs.
AI "Masters," Doomsday Prophets, and the Onslaught of Pseudo-experts
It’s not just companies that have noticeably started reaching for solutions labeled "Artificial Intelligence." The buzz surrounding this topic has also changed things for those of us wanting to learn a bit more about AI. And woe to anyone trying to find a sensible course in this field right now. Working with the support of generative AI is still a relatively new and trendy topic. This means that while we have many valuable courses, we also have a surge of pseudo-experts capable of peddling whatever they dream up to knowledge-hungry audiences.
I’ve happened to watch webinars by such "specialists" who, instead of briefly answering simple questions, would just say: "Ah, I cover that in my paid course." If I had been swept up in the moment, I’d probably be a few hundred zlotys poorer. However, in moments like these, a red flag usually goes up - if this is what a free webinar looks like, how many times during a paid course will I hear that I need to buy another course to get an answer to the next question?
And there is no shortage of such "experts." My eyes often scan clickbait headlines promising massive time savings if only we hand over all our tasks to AI. That’s actually a more "positive" example - plenty of other headlines predict the death of various professions that AI will inevitably replace. Yet every course seems to promise infinite possibilities with minimal personal effort. And, of course, the right course - from the person advertising it. To me, it sounds a bit suspicious...
That’s why, in recent months, it’s worth being extra careful with all the courses, "mini-courses," degrees, and other materials - both free and paid - promising quick expertise in the field. The topic has become high-profile, and many people want a slice of the market pie. Soon, I’ll recommend two free courses that are worth considering. In the meantime, let's look at why AI might have caused so much market turmoil.
What Can a Hammer Do?
Language models are just - and yet, significantly - a tool. Many experts emphasize that we still don't fully understand how AI works, but in most cases it responds to our prompts and other settings. It has its pros and cons, and for certain tasks, specific models may perform better than others.
The advantages of using AI include significant capabilities in generating text, graphics, and sound, translating content into other languages, or writing code. LLMs can support us in content analysis and research, and they can help programmers not just with writing code, but also with debugging it. Many people use LLM capabilities to acquire new knowledge, discuss certain issues, or create things that might have previously been financially out of reach for beginners - like professional graphics, jingles, etc.
The downsides include "hallucinations" - situations where LLMs return false information. Not every language model has up-to-date knowledge or internet access to supplement that knowledge, which affects the content the model generates. The development of AI also means the development of new forms of AI-powered attacks and the flourishing of disinformation - generating deepfakes and fake content has never been as easy as it is now, which unfortunately facilitates social engineering attacks and the spread of misinformation.
Copyright is also a major issue - depending on the context, the rights might belong to us or the provider of the solution we use. Another problematic issue is what often constitutes the strength of a language model. LLMs are trained on data, materials, and works created by specific people, so we can never be sure if the model didn't rely too heavily on someone’s work, thereby infringing on existing copyrights. Interestingly, at this moment - to my knowledge - the user is responsible for potential infringements, not the provider, even though providers themselves don't always train their models legally. One only needs to recall the case of Meta training models on data without the authors' consent, using pirated sources.
Awareness of the pros and cons of using AI may, but doesn't necessarily, affect how we use it. That is also influenced by how we perceive language models and our beliefs regarding their capabilities. Another factor is the skills we possess or have already acquired through using AI. All of this determines whether we are able to use language models - our hammer - for their intended purpose (driving nails), or if we are forcing them into contexts where they make no sense (like trying to strain pasta or rice with one). There will also be those for whom the hammer is a tool for committing crimes. Meanwhile, in the background, a battle between providers for our attention (and our money) is raging.
Despite certain limitations, the AI hammer has many capabilities worth exploring. Especially since LLMs are likely here to stay. Over time, as the initial awe fades, their position in certain applications has a chance to become firmly established.
A Revolution in Thinking
Using LLMs requires, first and foremost, not just the knowledge of how to use them, but a significant shift in mindset. Knowledge of AI's capabilities alone doesn't translate into practical application. I realized this one evening when I returned to working on my website.
Some time ago, LinkedIn disabled the ability to download your own posts from the platform. It annoyed me quite a bit - after all, I’ve put a lot of content there that I want to use for new posts. But before I use them, I want an alternative - a backup in case I lose access to my account for some reason. Unfortunately, it's hard to negotiate with the decisions of a large service provider.
I started considering writing a scraper, as I didn't want to spend money on existing solutions I only planned to use once. I hit a wall there. LinkedIn offers some options in that regard, but once I read the terms, I decided it wasn't worth the effort. Since the topic of scrapers is completely foreign to me, I knew I’d spend a lot of time writing such a tool without any guarantee that LinkedIn wouldn’t change something again soon. I was left with good old copy-pasting.
After three hours of slow and tedious copy (from the portal) and paste (into another markdown file), I realized that... I was struggling unnecessarily. This wasn't content that I couldn't feed into an AI. And if so, I could write a prompt that would translate the content, add the necessary tags, put it into a markdown file, and add the correct date to the filename. Voila. The next three hours flew by, the content I wanted to save "for later" appeared on the site, and I gained several hours to catch my breath.
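The file-saving part of the workflow described above can be sketched in plain Python. To be clear, this is my own illustrative sketch, not the author's actual prompt or script: the function name, front-matter layout, and the date-in-filename convention are all assumptions.

```python
from datetime import date
from pathlib import Path
import re


def save_post_as_markdown(content: str, title: str, tags: list[str],
                          post_date: date, out_dir: Path = Path("posts")) -> Path:
    """Wrap a copied post in front matter and save it as a dated markdown file."""
    # Build a URL-friendly slug from the title (lowercase, dashes).
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    # Assumed filename convention: YYYY-MM-DD-slug.md
    filename = f"{post_date.isoformat()}-{slug}.md"
    # Minimal front matter with the tags the post should carry.
    front_matter = "\n".join([
        "---",
        f"title: {title}",
        f"date: {post_date.isoformat()}",
        f"tags: [{', '.join(tags)}]",
        "---",
        "",
    ])
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / filename
    path.write_text(front_matter + content, encoding="utf-8")
    return path
```

With a helper like this, the AI only has to handle the genuinely hard part - translating the text and proposing tags - while the mechanical file bookkeeping stays deterministic.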
That was the moment when - after several months of struggle - it finally clicked for me how many possibilities effective AI use can offer. How much it can speed up work - if only I know how to ask for help. I felt a bit more at ease in this dialogue with technology. I imagine that’s exactly how characters in the Star Trek universe feel when asking the ship's computer for assistance with tasks.
Computer, tell me...
If it weren't for courses, my own stumbles, and watching how others work with AI, I wouldn’t have gained enough knowledge or developed the right habits to reach for something other than a conventional solution in the situation above. A shift in thinking about what I can do with the help of LLMs requires time and practice, not just a superficial awareness of what can be done with AI.
Despite this experience, I still approach using language models with caution. I still have many questions and doubts. Some things - like writing blog posts - I still prefer to do myself. I simply love writing and don't want to deprive myself of that pleasure; I also want to offer the reader something I created myself. However, when it comes to translation, I reach for LLM help due to time constraints - otherwise, maintaining a bilingual site would be very difficult.
If you are looking for such a breakthrough experience, I have two pieces of news: bad and good. The bad - it won't happen on its own. The good - by working with AI and exploring its capabilities, you can increase the chances that, over time, such ease in using LLMs daily will emerge for you too.
On this journey, there are no simple multi-purpose prompts, no silver bullets, or miracle solutions for everything. Each of us has different knowledge and can use it differently when working with a model. It is certainly worth keeping security and legal aspects in mind regarding AI use. Even so, both fields are changing dynamically as lawyers and cybersecurity professionals alike try to keep pace with the shifts.
We are undoubtedly living in interesting times that require us to confront how we perceive the world and what it offers us. However, I think it's worth giving ourselves the space to learn how to work with the help of AI. Over time, as it becomes easier to imagine how to use LLMs in our daily lives, we might feel awe, or perhaps a bit of disappointment. In time, we might grow to like AI, or even become friends with it - knowing when we can rely on AI like a friend, and when - even while remaining on friendly terms - it’s better to take what "AI came up with again" with a massive grain of salt.
As you’ve noticed, the world of Star Trek is close to my heart - many things there can be done with voice commands ("prompts") alone, yet not everything can be handled that way. Although the reality of using AI may increasingly resemble that of Star Trek, significant knowledge is still required to understand the capabilities of LLMs, their limitations, and what can go wrong, so that we can better use the tools that are likely here to stay.
And finally, let me emphasize once more - there is no point in oscillating between excessive optimism (since, as we can see, some flee from criticism and double down on positive messaging, as The Guardian reports) and excessive pessimism (seen, for example, in the article tempering optimistic moods by Bartosz Kicior and Rafał Pikluła in Spider's Web). Life will surely verify optimism, pessimism, and rationalism alike. As is usually the case in such matters, I recommend moderation.
