The insight we bring is the recognition that each employee performs a distinct set of tasks, and that customized training for those tasks is key to using AI effectively. We also recognize that using AI incorrectly produces “workslop”.
Workslop occurs when Generative AI is used to generate results quickly and the raw output of a prompt is passed along, unreviewed, for someone else to consume. People who work this way can generate sloppy output at remarkable rates and volume.
I learned long ago that every time you send work to someone else (a consumer), that person can only produce work as good as what you provide. Ask whether what you are providing is complete and accurate, and you will find out whether your work products are easily consumable. Often consumers simply “suck it up” and fix the work they receive, to avoid the hassle of waiting for rework, and never tell you. Because Generative AI is very good at hiding the fact that an answer is incomplete, wrong, or a fantasy, reviewing its “answers” is critical.
Another insight is the handicap employees experience when limited to the tools IT provides. The primary goal of IT when deploying any technology is to ensure it complies with company policies, and security policies are paramount. Generative AI services have repeatedly made data obtained from users publicly available. This forces IT to use secure cloud services that are “locked down” to comply with policies developed before today’s cloud services became prevalent. These locked-down services are deployed, and employees are expected to use them. However, being locked down prevents employees from providing the data Generative AI needs to perform its work. Furthermore, the tasks are employee-specific, while IT is a general-purpose service organization with limited resources; it cannot cater to individuals. The resulting tools are often restricted to the point that they are basically useless or, worse, produce “workslop”. Untrained employees are not equipped to deal with the complexities of rectifying this situation. We are.
Generative AI is a remarkable technology. For the most part, it replaces how you interact with a computer: instead of clicking buttons or entering commands, you chat with it. The model has been trained on vast amounts of text and encodes the “knowledge” required to answer the questions you pose, and it can ask you to clarify requests as well. Each chat essentially navigates a huge “neural network” with billions of parameters. If your chat results in a path that contains the data the model was trained on, and that data is what you are looking for, you will succeed. Unfortunately, finding this path requires a lot of detailed and precise instructions, and few people have the patience or training to generate the volume of text this essential interaction with the “AI” demands. A good Large Language Model (LLM) can reason, essentially substituting knowledge into templates that contain placeholders. The more refined the LLM, the better it performs this task. These refinements are expensive, and IT is unlikely to be able to provide refined LLMs to its user base. This means employees don’t get the tools they need to perform at the level management expects from an AI-enabled employee.
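The gap between a vague request and the detailed instructions an LLM actually needs can be sketched in code. This is a minimal, hypothetical illustration (all names, roles, and template text are assumptions, not part of any real product): a reusable template with placeholders that is filled in to produce the kind of precise prompt described above.

```python
# Hypothetical sketch: contrasting a vague one-line request with a
# detailed prompt built from a placeholder template.

VAGUE_PROMPT = "Summarize this report."

# A template with placeholders; the filled-in version carries the role,
# audience, constraints, and source material the model needs.
DETAILED_TEMPLATE = (
    "You are an assistant for a {role}.\n"
    "Task: summarize the report below for {audience}.\n"
    "Constraints: at most {word_limit} words; flag any claim you cannot "
    "verify from the text instead of guessing.\n"
    "Report:\n{report_text}\n"
)

def build_prompt(role, audience, word_limit, report_text):
    """Fill the placeholders to produce detailed, precise instructions."""
    return DETAILED_TEMPLATE.format(
        role=role,
        audience=audience,
        word_limit=word_limit,
        report_text=report_text,
    )

prompt = build_prompt(
    role="financial analyst",
    audience="the executive team",
    word_limit=150,
    report_text="...",  # the actual report text goes here
)
```

The point is not the particular wording: it is that the detailed prompt states who the answer is for, what the constraints are, and what to do when the model is unsure, which is exactly the volume of text most untrained employees never produce.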
The solution to this problem is finding a service that has trained an LLM in the area of expertise you need help with. These services charge monthly fees and are often “credit based”: the more you use them, the higher the cost. Given that a company usually has employees skilled in dozens of different disciplines, paying for dozens of different services is considered prohibitive. This ignores the fact that fewer, highly productive, AI-enabled employees free up funds that can be used to pay for these services.
An alternative to paying for a monthly service is for companies to create their own GPT assistants. ChatGPT Enterprise and its competitors provide tools to build these assistants; at a bare minimum, such tools support shared prompt libraries, a less capable but useful starting point. Constructing your own GPT or shared prompts requires skill, and we can help you develop it.
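A shared prompt library can be as simple as a set of named, reusable templates that colleagues fill in for their own tasks. The sketch below is purely illustrative (the task names, template text, and `render` helper are all hypothetical, not features of any specific product):

```python
# Hypothetical sketch of a shared prompt library: named templates that
# different employees can look up and fill in for their own tasks.

PROMPT_LIBRARY = {
    "meeting-summary": (
        "Summarize the meeting notes below for {audience}. "
        "List decisions and open action items separately.\n\n{notes}"
    ),
    "status-report": (
        "Draft a weekly status report for {project}. "
        "Include progress, risks, and next steps.\n\n{raw_updates}"
    ),
}

def render(name, **fields):
    """Look up a shared prompt by name and fill in its placeholders."""
    return PROMPT_LIBRARY[name].format(**fields)

prompt = render(
    "meeting-summary",
    audience="the sales team",
    notes="...",  # the actual meeting notes go here
)
```

Even this bare-minimum form captures the idea: the hard-won prompt wording is written once, by someone with the skill to write it, and reused by everyone else.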