Career Boost: What is Private GPT? How It Improves Tech Projects
In project management and tech development, one's professional journey often intertwines with the life cycle of a project. Just as tech evolves, so do the challenges and the solutions that address them. At Mobica, we're not just about overcoming these challenges – we're about empowering our team members, our 'Mobicans', to grow, learn, and contribute to the technological advancements of tomorrow.
Augmenting Project Teams And Achieving More with AI Integration
“In this article, I will discuss the life cycle of a typical project. By detailing the various phases we go through at Mobica and the challenges we frequently face, I’ll showcase how the inclusion of an AI consultant, particularly the GPT solution, can truly make a difference. And if you're thinking of boosting your career, what better way than to join a team that’s at the forefront of leveraging AI for project enhancement?”
says Wojciech Mazur, UI/UX Expert at Mobica.
At Mobica, we work alongside globally recognised brands to develop groundbreaking tech, and our handiwork can be seen in many of the incredible digital experiences people interact with daily. The importance of AI and its implementation is evident, but where you fit into this grand scheme is what truly matters.
The Life of a Traditional Project at Mobica
At Mobica, projects generally follow a sequence of phases:
Our priority is to deeply understand our customers. Our engineers investigate the tasks and gain a full understanding of the business, products, their needs and requirements.
We ensure that we understand the users. The findings are weighed against user needs, wishes, abilities and expectations to identify the underlying issue and how best to adapt the solution to human factors.
We transform ideas into reality. Possible solutions are created in the form of design directions, look & feel sketches, wireframes, mockups and prototypes. We define the technology and frameworks, or adhere to an existing project identity.
The best solutions are streamlined and perfected. We iterate by gathering input from stakeholders and end-users. User involvement, testing and metrics provide data to make informed decisions.
We care about implementation. Working side by side, interconnected, and in constant exchange with multiple teams, we make sure the product is business-ready and provide ongoing support.
Challenges in the world of project management are two-fold: internal and external.
Internal Project Management Challenges
“One of the significant hurdles engineers face when joining a new project is the onboarding process. I recently listened to a Lex Fridman podcast where Mark Zuckerberg discussed how it might take up to 3 months for an engineer at Meta to familiarise themselves with the necessary libraries”
says Wojciech. Any strategies to streamline this period would be a massive improvement to the whole cycle.
“I've noticed a trend where customer documentation varies vastly in guidelines, styles, and standards. This inconsistency can be a real pain point for individuals trying to comprehend and work with multiple sources of information. In my observations, clients often struggle with managing roles and responsibilities when collaborating with consultants, primarily if they've never been through the experience before. Even though there are attempts to simplify the matching process, it still consumes a notable chunk of time.”
Another common internal challenge is the time constraints tech teams often deal with from project to project:
“We try to offer support on our side to ease the matching process, but that also takes precious time. As is often the case, we're racing against the clock. Projects have strict deadlines dictated either by the company’s plans or by market timing. Most companies can’t afford the ‘it’s ready when it’s ready’ approach known from studios like CDPR, so constant time optimisation is a big part of the project cycle.”
External Project Management Challenges
External factors such as regulations, international standards, technological requirements, and company-wide guidelines can be complicated and extend the project lifecycle. Our expert sheds some light on how staying up to date with the most recent changes to frameworks in this rapidly shifting market consumes a sizeable chunk of valuable time:
“I find that going through the documents that validate the certifications needed for a product is also a big task, as some of them (e.g. those from the International Electrotechnical Commission) can easily exceed 400 pages. Releasing properly certified devices like radars, navigation aids, and communication equipment almost always requires some of the engineers to read through extensive regulations.”
Overcoming Internal and External Challenges
When talking about internal challenges, it's essential to remember that at Mobica, ideas are prioritised over hierarchy and biases. This means that regardless of one's position, their voice matters, and their concerns are addressed.
As for the external difficulties, while they are complex, they're not insurmountable. And when you have a company culture that believes employees should be both content and inspired, giving them the flexibility to work as they deem fit, the challenges become a tad easier. The solutions we bring to the table are born from brainstorming sessions, collaborative events, and the desire to create and innovate, all underpinned by our commitment to our team's happiness and well-being.
Now that you know the issues we face, let’s talk about how GPT technology can help us tackle them.
What is Private GPT?
In its simplest form, Private GPT is a wrapper around the LangChain framework that serves a Large Language Model, managing the queries sent in and out. Its main feature, though, is the ability to feed the LLM different documents. The ingest function can take in all sorts of document formats, from TXT, PDF, and HTML to Word and CSV files, so in that respect it’s quite versatile. After a document is ingested, its text goes through a process of tokenisation and vectorisation that produces a database layer. This database works like a lens that helps the LLM focus its answers on the proper context: it emphasises the insights extracted from the documents to understand the user query and adjusts its answers accordingly. In layman's terms, it allows you to talk to the documents you feed it.
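The ingest-then-retrieve flow can be illustrated with a deliberately simplified sketch. The real pGPT uses a proper embedding model and a vector store via LangChain; here, purely for illustration, "vectorisation" is a plain bag-of-words count, retrieval is cosine similarity, and the two sample documents are invented:

```python
import math
from collections import Counter

def vectorise(text: str) -> Counter:
    """Toy 'vectorisation': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[term] * b[term] for term in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# "Ingest": store each document alongside its vector in a tiny in-memory index.
documents = [
    "The radar module must pass IEC certification before release.",
    "Meeting notes: the login screen redesign was approved yesterday.",
]
index = [(doc, vectorise(doc)) for doc in documents]

def retrieve(query: str) -> str:
    """Return the document most relevant to the query -- the 'lens' that
    focuses the LLM's answer on the right context."""
    query_vec = vectorise(query)
    return max(index, key=lambda pair: cosine(query_vec, pair[1]))[0]

# The retrieved text would be prepended to the prompt sent to the LLM.
context = retrieve("what certification does the radar need?")
print(context)
```

In a real deployment the vectors come from a trained embedding model, so semantically similar passages match even without shared words; the overall shape of the pipeline, however, is the same.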
From Wojciech Mazur’s view,
“The ability of an LLM to process natural language fluently pretty much turns pGPT into a fully viable project team member who knows everything about the product, knows the requirements by heart, has infinite patience, is always available and responds in real time. Combined with the ability to ingest the meeting notes that most meeting software now generates automatically, pGPT can stay up to speed with the latest developments and take that ever-changing landscape of iterative requirements into account when generating answers. Just in case you don’t trust it (and you really shouldn't, more on that later), it quotes its sources, so you can go and check the data yourself. How neat is that?”
Most importantly, the pGPT can be set up locally. This is especially important in our line of work because of the secrecy needed in a competitive field of product development. Various Non-disclosure agreements are required to protect our clients from industrial espionage and competition. On why a local setup can be beneficial, he shares that
“It is quite hard to trust an external company that delivers LLM services to keep your documents secure. Although companies like Amazon, OpenAI, and Microsoft have superior LLM technology at their disposal, their inability to step back from handling your data and simply let you run things locally makes the whole process less secure. This single reason is why we now have over 10 ongoing private GPT-like projects being actively developed by different companies and communities, mainly in the open-source spirit.”
Hallucinations, Context Length, and Real-Time Updates
The shortcomings of available LLMs manifest themselves in pGPT projects: hallucinations and context length limitations being the most troubling ones.
On hallucinations, Wojciech adds
“There is a battle between how creative the model should be versus how accurate its responses are. Funnily enough, people tend to call the model creative, when it’s producing innovative answers, but at the same time calling out hallucinations, when the model’s creativity goes beyond realism.
It is important for the solution to be more than a PDF-citing machine, which would basically turn it into a glorified search engine. Ideally, it needs to process natural language, explain complex stuff, and act as an advisor and a tutor at times. This requires the constraints to be somewhat loose… unless it’s making stuff up. With the ability to cite sources, however, we do seem to have at least some control over the quality of the output, even if it makes the tool a bit less convenient to work with.”
Context length is a more complex problem. This parameter defines how much text the LLM can process before it loses track of what it is talking about. Sadly, this is a model-architecture problem rather than a hardware problem, so you can’t solve it by simply adding more RAM to the rig it runs on. LLMs with bigger context windows are being trained every month, but the bigger they get, the more expensive they become not only to train but also to run. Unless there is a substantial breakthrough in the underlying architecture, this problem may remain unsolved.
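The practical effect of a fixed context window is easiest to see with a toy sliding-window sketch. The whitespace token counter, the tiny budget, and the sample messages below are invented stand-ins; real tokenisers and budgets in the thousands of tokens behave the same way in principle, which is exactly why early project documentation eventually falls out of scope:

```python
def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokeniser: one token per whitespace-separated word.
    return len(text.split())

def fit_to_context(history: list[str], budget: int) -> list[str]:
    """Keep the most recent messages that fit within the token budget,
    dropping the oldest first -- this is how long-running sessions 'forget'."""
    kept, used = [], 0
    for message in reversed(history):          # walk from newest to oldest
        cost = count_tokens(message)
        if used + cost > budget:
            break                              # everything older is discarded
        kept.append(message)
        used += cost
    return list(reversed(kept))                # restore chronological order

history = [
    "spec v1: button colour is blue",          # 6 tokens -- oldest
    "spec v2: button colour is green",         # 6 tokens
    "question: what colour is the button",     # 6 tokens -- newest
]
window = fit_to_context(history, budget=13)
print(window)  # the oldest spec no longer fits and is silently dropped
```

Notice that the model never sees spec v1 at all, so it cannot even report that the requirement changed.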
Why does it matter?
“Well… as the project progresses and more and more documentation is added to the database via the ingest function, the LLM has a harder time distinguishing which data is recent and which is obsolete. The holy grail of pGPT functionality is the ability to scan, learn and compare two documents: to understand which one it needs to follow and what has changed. Again, citing sources helps here, but it puts ever more pressure on the engineer, to the point where it makes more sense to ‘lobotomise’ the model and start over with updated docs every now and then.”
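One pragmatic stopgap for the "which document is current" problem is diffing a new revision against the old one before re-ingesting, so only the actual delta needs attention. A minimal sketch using Python's standard difflib; the spec snippets and version labels are made up for illustration:

```python
import difflib

old_spec = """The login button is blue.
Sessions expire after 30 minutes.""".splitlines()

new_spec = """The login button is green.
Sessions expire after 30 minutes.
Two-factor authentication is mandatory.""".splitlines()

# unified_diff surfaces exactly what changed between the two revisions,
# so an engineer (or an ingest pipeline) can flag the obsolete statements
# instead of letting both versions compete inside the vector database.
diff = list(difflib.unified_diff(old_spec, new_spec,
                                 fromfile="spec_v1", tofile="spec_v2",
                                 lineterm=""))
print("\n".join(diff))
```

Lines prefixed with `-` are the requirements that should be retired from the index, and lines prefixed with `+` are the ones to ingest. It is no substitute for a model that genuinely understands document supersession, but it keeps the database from accumulating contradictions.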
The Future of AI: Open-Source Collaboration vs Commercial Solutions
Wojciech shares his take on the industry's current outlook on the future of AI -
“There are two options here: either wait for someone else to come up with a working solution or start developing one ourselves. History tells us that the first solution leads to commercialisation and it’s fine if we are okay with paying for it. Maybe it makes sense for some companies to buy a ready-made package. We can’t be a judge of that. What makes more sense however is working closely with the open source community. Engaging in the adventure of exploration, learning and sharing as every journey like that makes us better developers, designers, and human beings.
Rarely do we have the opportunity to be at the very edge of what’s possible in tech, as more and more of the groundbreaking work happens in the closed labs of big corporations. Looking back at the history of IT, it is inspiring to know that things like SSL were written by one person. JavaScript was created over a couple of summer weeks by a single man. Even the fundamental unit underlying the whole Large Language Model revolution, the Transformer, was created by a small group of researchers in 2017.
The open-source collaboration path is a way to address that and put innovation back in the hands of the everyday engineer.
As we mentioned earlier, training bigger LLMs won’t likely solve the problems we face with pGPT. Sam Altman (CEO of OpenAI) has said that the future of solutions like ChatGPT lies not in their size but in multimodality, so it is possible that ChatGPT version N (5, 6, 7…) will have agency over specialised sub-AIs acting as its aides. Maybe that’s the way future pGPT solutions could be developed as well. Maybe another revolution will come from reworking the base unit, the transformer, and we will get to play with an entirely different LLM architecture soon. Only time will tell.
In conclusion, the ever-evolving landscape of AI technology beckons us to be proactive, to be pioneers rather than mere spectators. Whether we choose the path of open-source collaboration or rely on commercial solutions, the key is to remain engaged, curious, and adaptable. After all, the next big leap in AI might just be around the corner, waiting for someone like you to discover it.”
Level-Up your Career at Mobica: Shape the AI-Driven Future
At Mobica, we're not just here to witness the future; we're here to build it.
The promise of AI and the potential it holds is evident. As we move forward, the line between human creativity and AI's capabilities blurs, opening up avenues of innovation previously unthought of.
Joining Mobica isn't just about being on a team; it's about contributing to well-known clients and projects and constantly pushing the envelope. So, if you've ever thought about how AI can improve your professional journey, you should consider joining our team.
Join us today and let's shape the future of technology!
Get started with Mobica today!
In Mobica since 2013. Currently in the 10th year of his tenure at Mobica, his work focuses on the proactive capture and resolution of problems in information architecture, system behaviour and user interaction. Outside of his professional commitments, he has a keen interest in the study of diffusion models and LLMs. Although he does not code, his enthusiasm for these interests remains unwavering.
Expert UI/UX Designer, Applications Competence Centre