We have been using Atlassian products to streamline system and service support processes for our clients for many years.
This solution has proven reliable and effective, enabling us to:
- Centralize Task Management: With Jira, we establish a single hub for registering, tracking, and processing all support requests. This ensures team transparency and cohesion, guaranteeing that no request goes unnoticed.
- Enhance Work Efficiency: Jira empowers us to automate routine tasks such as assigning tickets to specialists, escalating issues, and sending notifications. This frees up employee time to tackle more complex tasks, thereby boosting overall productivity.
- Improve Communication: Confluence serves as our knowledge repository, where we document all essential information about systems and services, providing employees with round-the-clock access to up-to-date instructions, databases, and other materials.
- Ensure Transparency: Atlassian products allow us to track key performance indicators (KPIs) such as issue resolution time and customer satisfaction. This lets us analyze performance, identify areas for improvement, and continuously refine our support processes.
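Jira automation rules themselves are configured in the UI rather than in code, but the routing and escalation logic behind such a rule can be sketched as follows. The team names, categories, and escalation windows here are purely hypothetical, not Jira defaults:

```python
# Illustrative sketch of a ticket-routing rule of the kind we configure
# in Jira automation. All names and thresholds are invented examples.

ROUTING = {
    "network": "network-team",
    "billing": "billing-team",
    "access": "servicedesk",
}

# Hours a ticket may sit unanswered before it is escalated.
ESCALATION_HOURS = {"critical": 1, "high": 4, "normal": 24}

def route_ticket(category: str, priority: str) -> dict:
    """Pick an assignee group and an escalation deadline for a new ticket."""
    assignee = ROUTING.get(category, "servicedesk")   # fall back to the default queue
    deadline = ESCALATION_HOURS.get(priority, 24)     # fall back to the normal SLA
    return {"assignee": assignee, "escalate_after_h": deadline}

print(route_ticket("network", "critical"))
# {'assignee': 'network-team', 'escalate_after_h': 1}
```

In Jira the same decision is expressed as an automation rule ("when issue created → assign by component → set SLA"); the sketch only shows the decision table such a rule encodes.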
In essence, Atlassian products have become key to successfully implementing our system and service support processes. Thanks to these tools, we provide our clients with high-quality, timely assistance, fostering trust in our company.
I was particularly intrigued by the opportunity to join the beta testing of new AI-powered features within the Atlassian product ecosystem. I wanted to explore the AI capabilities the company is integrating into Jira Service Management and Confluence.
Assistance in Writing and Editing Text
Initially, developers integrated GPT models into interfaces designed for text writing and editing.
GPT stands for “Generative Pre-trained Transformer.” It is an artificial intelligence model that utilizes the transformer architecture, which is a type of neural network capable of analyzing and generating text. The main feature of GPT is that it is pre-trained on large volumes of text data, learning the relationships between words and predicting the next word in the text. Then, after the pre-training stage, the model can be fine-tuned on more specific data or used for text generation on demand.
GPT has numerous applications, including text generation, question answering, automatic summarization, machine translation, sentiment analysis, and more. OpenAI has released successive versions of GPT with growing size and parameter counts, each typically improving performance and the quality of generated text.
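To make "predicting the next word" concrete, here is a deliberately tiny sketch: a bigram frequency counter rather than a transformer. It learns, from a toy corpus, which word most often follows a given word. Real GPT models do this over billions of tokens with attention layers; this only illustrates the training objective:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction, the core pre-training task of
# GPT models. A bigram counter stands in for the transformer here.

corpus = "the ticket was resolved the ticket was escalated the issue was resolved".split()

# Count which word follows which.
nexts = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    nexts[prev][cur] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently observed after `word` in the corpus."""
    return nexts[word].most_common(1)[0][0]

print(predict_next("was"))  # 'resolved' ("resolved" follows "was" twice, "escalated" once)
```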
For example, it is possible to change the tone of the written text to be more professional, empathetic, casual, neutral, or educational. The text does change when this action is applied, and this capability may help a support employee adapt their communication style to a specific company or client. For now, though, applying the right tone remains a manual step for each piece of text.
Attempting a text tone change
Another proposed capability for working with text is summarization. This is something GPT models do well, and the feature works as expected.
The system can also help you correct grammatical and lexical errors in the text while writing.
Additionally, you can use GPT’s ability to generate action items based on the written text.
These features are essentially predefined GPT prompts built into the system.
Making Action Items
A separate point worth highlighting is the platform’s capability called “Brainstorm”. Here, the system offers us a bit more freedom: we can write our own prompts to generate text while working on a ticket. Using this feature is not much different from entering a prompt in the ChatGPT dialogue box.
These assistants are integrated into all text-based work dialogue boxes.
Brainstorm mode
Assistance with Creating a Request
Let’s continue our research. Within Jira Service Management, users can create requests through an external portal. When a project has an extensive classification of problem types to choose from, this can be difficult for users to navigate. Thanks to the new AI-based functionality, however, the portal now lets users describe the essence of their problem in natural language. The system then automatically narrows the selection of problem types to those relevant to the request and suggests related knowledge base articles.
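Atlassian does not document how this matching is done, but the idea can be conveyed with a naive keyword-overlap heuristic. The request types and keywords below are invented for illustration; the real feature presumably relies on language models rather than word matching:

```python
# Hypothetical sketch of narrowing request types from a free-text
# description. Categories and keywords are illustrative assumptions.

REQUEST_TYPES = {
    "VPN access": {"vpn", "remote", "connect"},
    "Password reset": {"password", "login", "reset"},
    "Hardware request": {"laptop", "monitor", "keyboard"},
}

def suggest_types(description: str, top_n: int = 2) -> list[str]:
    """Rank request types by how many of their keywords appear in the text."""
    words = set(description.lower().split())
    scored = [(len(words & kw), name) for name, kw in REQUEST_TYPES.items()]
    # Keep only types with at least one keyword hit, best matches first.
    return [name for score, name in sorted(scored, reverse=True) if score > 0][:top_n]

print(suggest_types("I cannot connect to the vpn from home"))
# ['VPN access']
```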
AI elements on the customer portal
Although this feature may not be particularly exciting, it is a useful tool for speeding up access to relevant information during interactions with users.
Virtual Assistants
Here we come to the most interesting functionality of the Atlassian Intelligence platform – the ability to launch virtual assistants for your support projects. Let’s see how the company managed to implement this idea in its technical support product, Jira Service Management.
To interact with virtual assistants, you can use channels on the Slack platform. Atlassian has developed and published its own Slack application called “Atlassian Assist”. To get started, we set up a connection between our Jira project and a Slack channel. The application is then invited to this channel to interact with users. Administrators can add users to the channel, or users can join it themselves if they want assistance with our project.
Virtual Assistant in Slack
The main advantage of the assistant is its ability to access project information and generate responses based on it. I spent some time figuring out what information the assistant draws on, and it turned out to be quite simple: it constructs its response from, and links to, materials posted in the Confluence space linked to our project. The assistant does not analyze the correspondence conducted within the tickets and does not base its responses on that data.
Conversation with AI bot
Thus, the assistant integrates well into the process of providing users with responses and recommendations on topics already covered by articles in the project’s knowledge base. It becomes noticeably less sure when a user’s query does not map neatly onto a ready-made recipe in the knowledge base. Even in that situation, however, the workflow continues: a ticket can be created in Jira directly from the Slack dialogue, and the analysis of the issue proceeds through to resolution. Or perhaps a live support team member will handle your issue instead. Indeed, receiving service from a human may become a privilege.
What’s Under the Hood of the Intelligence Platform
The new intelligence platform is available only in the cloud versions of Atlassian’s products. Atlassian builds its cloud services on a foundation of sub-processors – infrastructure providers and other specialized services such as AWS, Azure, Databricks, Mailchimp, and many others. OpenAI serves as the provider of the GPT models.
In its technical notes, Atlassian emphasizes that client data is used only during interactions with the OpenAI platform and that the models are not retrained on your data. It can be assumed that the service employs a Retrieval-Augmented Generation (RAG) scheme for interacting with LLMs. This means that additional information from external sources is programmatically “injected” into the query and fed, together with the question, as input to the language model. In other words, extra context is added to the query so that the model can give the user a more complete and accurate answer. This method is effective when your knowledge sources are constantly growing and you don’t want to keep spending resources on retraining models.
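Such a RAG flow can be sketched in a few lines. The knowledge base, the keyword-overlap retrieval, and the prompt template below are illustrative assumptions, not Atlassian’s actual implementation; production systems typically retrieve with vector embeddings rather than word matching:

```python
# Minimal sketch of Retrieval-Augmented Generation: retrieve relevant
# articles, then prepend them to the user's question as context for the
# LLM. The articles and retrieval heuristic are invented for illustration.

KNOWLEDGE_BASE = {
    "VPN setup": "Install the client, then sign in with your SSO account.",
    "Printer setup": "Add the office printer via Settings > Devices.",
}

def retrieve(question: str, top_n: int = 1) -> list[str]:
    """Rank articles by keyword overlap between question and title."""
    q = set(question.lower().split())
    ranked = sorted(
        KNOWLEDGE_BASE.items(),
        key=lambda kv: len(q & set(kv[0].lower().split())),
        reverse=True,
    )
    return [f"{title}: {body}" for title, body in ranked[:top_n]]

def build_prompt(question: str) -> str:
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer using only the context."

# The resulting prompt carries the "VPN setup" article as grounding context.
print(build_prompt("How do I do VPN setup on my laptop?"))
```

The key design point, and the reason Atlassian can avoid retraining, is that new knowledge enters only through the retrieved context, never through the model’s weights.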
An obvious question arises: how does Atlassian handle your own data? The company states that all information flows through its Trust Center, which has the following security properties:
- All connections to its sub-processors are encrypted and controlled by the platform.
- Data within the context of one client is not accessible to other clients.
- The partner, OpenAI, is claimed not to store client input and output from Atlassian and not to train its models on this data.
Conclusion
ChatGPT was launched by OpenAI just a year and a half ago. But it seems like we’ve been living with the capabilities of GPT models for an eternity. Therefore, the text AI capabilities provided by the Atlassian Intelligence platform no longer cause a “wow” effect today. This has become the norm for all modern products.
However, the implementation of the virtual assistant, as envisioned by Atlassian, seemed quite interesting to me. They brought user interaction with assistants into Slack channels rather than confining it to the portal and web interfaces of their products. We use Slack as our corporate messenger, and such assistants may well give our employees and customers a quick and convenient way to get recommendations for solving pressing problems. Integrating GPT models into technical support processes may also make it possible to save on human resources, primarily on the ServiceDesk team.
However, the company is cautious and does not give platform administrators tools for fine-tuning the interaction with GPT models. Perhaps there are concerns that the “freedom” GPT models allow in their responses could lead to unpleasant consequences.
At First Line Software, we are ready to guide you through the integration of AI to enhance your productivity, particularly within our Application Support Services. Let us help you harness the power of AI to transform your business operations effectively and efficiently. Contact us today to learn more about how we can boost your Application Support Services with cutting-edge AI solutions.
Anton Samsonkin
Senior Service Engineer
For over 20 years, he has been successfully building efficient support processes and teams across various technology sectors, including banking, telecommunications, and transportation. His extensive experience covers a wide range of tasks related to ensuring the uninterrupted operation of critical systems and infrastructure in demanding, dynamic environments.
Currently, he focuses on bringing innovations, including artificial intelligence, into application support and maintenance processes. He is optimistic about using advanced technologies and methods to achieve outstanding results in application support and customer service.