Intro to AI Prompting

Join Karim as he shares how to move beyond prompt engineering and start building real context with AI. You’ll learn a simple 5-question framework for any AI interaction, why speaking to AI captures 4X more context than typing, when to use ChatGPT vs. Claude, and practical techniques you can apply right away.

S1- So welcome everyone to our next session. For those of you joining virtually, this is the new webinar that you should be in. Do you see attendees in the webinar? Guys, you've got people joining in. Perfect. We're back again with Karim Gallimore. Karim, welcome back. He's going to take us through a session on an intro to the AI prompting process, essentially the basics, right? Karim and I spoke before today about, okay, what are the things that will help you get started? Learn to experiment. Learn to talk with the different LLMs, with the goal of not being biased toward any one of them, because different ones have different specialties, right? And your tone and what you're looking for out of the AI tools could dictate your choice among the LLMs out there. So I'll let you take it from here.

S2- Okay. Just make sure everyone can hear me, okay. All right. Excellent. Thank you, Mona. Thank you again, team, for the chance to speak and share this knowledge. Trusting that those online can see the screen that I'm sharing. It's just a chat window; this is my personal ChatGPT account. They made a recent update: a number of months ago they released ChatGPT 5, which was an update from 4o, and now they've released 5.1, and OpenAI specifically is indicating that it's a lot smarter and, I guess, a lot more personalized in its responses. I say this because depending on what I try to do here, we might see some interesting results that are good, bad, or somewhere in the middle. We'll go along with it and see what we find. Now, ChatGPT represents generative AI. You've probably heard that term before, meaning that when it's working on giving you a result, it's actually generating the information on its own based on whatever OpenAI trained it with. LLM means large language model: there's this big database of knowledge that they've trained it with, and it manifests as ChatGPT. You've probably heard of Claude. You've likely heard of Perplexity and Gemini as well, Gemini provided by Google. These are all LLMs. I think they're up to eight or nine different types of AI models, and the LLM is one of them; it's the most popular and common one, which is this chat interface that you see here. Now, I've been to other workshops as I've been learning AI for the last couple of years, and you'll sometimes see things that say, here are the top five, 10 or 100 prompts you can buy online for $10 or $20. Get it as a Black Friday special for $49.97, whatever it might be. My thoughts on that: certainly there will be a number of those prompts that can help you prompt, meaning what you will type into the chat here before looking for an answer from the generative AI, but it also depends on your use case, your situation.
Certain prompts may benefit you depending on what your needs are, what you're doing on a daily basis, a nightly basis, in your lives. Others may not even be relevant to you. One of the things that you can definitely do with ChatGPT: when you're looking at this prompt box here in the middle, it says, what's on your mind today? Let's say I just type in, hello, how are things going? Obviously it will show that it's thinking with this little black circle that's kind of pulsing, and then it gives a response. You'll also notice that with this little microphone icon at the bottom, you can actually talk directly into the tool as well. I say this because, especially with this audience, which is tied to board leadership, board representatives, transcription and so forth: from scientific studies, I think MIT might have done this particular study, our brain has our thoughts going at around 3,000 to 4,000 words a minute. Court reporters can hit around, let's say, 250 to 300 words a minute. We probably talk around 150 to 200 words a minute. And the average that we type is probably somewhere around 20 to 30 to 40. So speaking can easily capture somewhere around four to eight times more context than what you can actually type into a prompt. So one of the things I'll try to show later is how speaking can definitely change the way that ChatGPT and other tools will respond to you. Now, before I start going into prompting, because ChatGPT is probably one of the more popular LLMs out there to use, there are some important settings that you have to watch for. If I go to the bottom left here and go to Settings, it'll bring up this window where, as with any piece of software, you can change the way it behaves, the way you want it to look and feel and operate. So it's showing the accent colour, language and so forth. Then I go to Data Controls, and some of you may have heard this, some of you may have not.
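As a rough sanity check (using the approximate words-per-minute figures quoted above, which are claims from the session, not measurements), the speaking-versus-typing ratio works out like this:

```python
# Approximate words-per-minute figures quoted in the session.
typing_wpm_low, typing_wpm_high = 20, 40        # casual typing speed
speaking_wpm_low, speaking_wpm_high = 150, 200  # conversational speaking speed

# Worst case: slowest speech against fastest typing; best case: the reverse.
ratio_low = speaking_wpm_low / typing_wpm_high   # 150 / 40 = 3.75
ratio_high = speaking_wpm_high / typing_wpm_low  # 200 / 20 = 10.0
print(f"Speaking captures roughly {ratio_low:.2f}x to {ratio_high:.1f}x more words per minute")
```

That range brackets the "four to eight times" figure mentioned in the talk.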
You see this option here that says Improve the model for everyone. This is important. When OpenAI released ChatGPT, they obviously had knowledge that they trained it with that governs the way it responds to you. If you have this option on, Improve the model for everyone, what that means is that any of your conversations in this tool, they are going to use to help with further tuning of their model. I'm sure a lot of you would not want your personal conversations with an AI to be provided to the companies that want to use that data and take advantage of it for their own benefit. So just make sure that you have this off, just to protect your personal information. Oh yeah. Sure. Let me do that. Yep. Here we go. I think that should be better. All right. Perfect. Thank you, Mona. We'll make sure that's off. Let's go back to it here. Now, when I started using ChatGPT, I was kind of curious: well, what exactly would I ask it? What sort of things do you expect an AI to help on? And they try to say it can be anything. Then I started getting a bit stressed, because I wasn't sure how to remember the top prompts that I should use, what to put in, and then be able to keep that knowledge in there and so forth. Now, if I'm starting this chat here, which is just in the main window, let's say the default version of ChatGPT, I'll ask something to the effect of: can you do a web search? Tell me information on what is happening at Illuminate 2025. So let's just try that and see what it says. It'll probably take a few seconds here; let's see what it comes back with. And I'm doing this because, interestingly, the way these AI models are trained, a given model will have a certain knowledge capture with a cutoff date, let's say earlier this year, earlier in 2025, or maybe going back to late 2024.
But for it to get the latest information, you'll notice in my prompt I said, can you tell me information on what's happening at Illuminate 2025, but I specified web search. It knew that it should search the web, because if it just tried to answer this without going on the web, it probably would have returned: sorry, I only know about Loom Analytics, the company itself; there isn't any recent information. So you have to give it a little bit of nudging in terms of where it should get the information. So I just put that into the prompt here. So it shows Illuminate 2025, it's showing some captures here, and it goes into what we do know is happening today. A separate listing shows "beyond automation" and so forth, some uncertainties to check, why this might be interesting for you. Now, it's giving me this feedback because, if I go back down here, there are ways that you can personalize the experience with ChatGPT. When you start off with it, it's just going to treat you as a generic human. It's not going to know your name, where you're from, what you do and so forth. But what I've done is that I've told it that my first name is Karim, this is my consultancy, and I even put in information saying, this is my name, this is where I'm living, this is my profession and so forth. I put this in there so that the responses it gives me are more tailored to my experience, but it also suggests how it's going to respond in terms of giving recommendations: okay, you gave me this prompt, Karim; maybe you want this other information, based on how you've called out your profile in the tool itself. All right. Now I want to go to something which is potentially a little bit different from what other people have heard in terms of prompting styles. There's the term prompt engineering, which means: how do you actually build a framework of prompts that will give you the most ideal response from ChatGPT?
What I actually work on, I don't memorize any prompts at all. I actually let the tool keep them in its own library going forward. But you also may have heard the term context engineering. What this means is looking at not just what you're actually putting in as a question or request; the tool is now going to look at, okay, what's the big picture? What are you looking for? What are you trying to achieve? It's as if you're having a conversation with an actual human, with a family member, a friend, a colleague, a peer and so forth. So the context engineering that I put in will definitely change the way it will respond. And I put it under five particular pillars. For those online, you'll have these materials, I guess, as a recording and also to be distributed, so don't worry about having to memorize them. I put it under five context questions; I'm not going to share them on the screen, I'm just going to mention them here. Number one: you say who you are. It doesn't have to be your name; you can just describe what you do. Number two: you tell it what your current state is. Number three: what's the future state that you want to hit? Number four: what are the things that you've tried before that either worked or did not work? And number five: the help that you need. So again: what you're doing, where you currently are, where you want to be, what you've tried, and what help you need from there. Let's maybe try it here. Let's say, this is great, thank you. We'll do this with kind of what I'm doing today in this role here as a public speaker. So I'll say: I am a public speaker. Currently I am looking for events that may require keynote speakers. So that's the current state. I would like to be considered for events coming up in December 2025. The desired future state. I have tried connecting on LinkedIn, but it hasn't worked well.
See what's available. What I've tried. I need your help to see where else I can search for opportunities. So again: what I'm doing, where I currently am, what I want to do, what I've tried that's either working or not, and where I need its help. Let's see how it responds here. I'll bring this down so you can see the prompt, and let's give it time to respond. Take a look at the Q&A. So it's pulsing. Let's see what it says.
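The five context questions above can be sketched as a small prompt-building helper. The function and field labels here are my own illustration, not anything ChatGPT itself uses:

```python
def build_context_prompt(who, current_state, future_state, tried, help_needed):
    """Assemble a context-rich prompt from the five questions in the talk:
    who you are, where you are now, where you want to be,
    what you've tried, and what help you need."""
    return "\n".join([
        f"Who I am: {who}",
        f"Current state: {current_state}",
        f"Desired future state: {future_state}",
        f"What I've tried: {tried}",
        f"Help I need: {help_needed}",
    ])

# The public-speaker example from the session, expressed through the template.
prompt = build_context_prompt(
    who="I am a public speaker.",
    current_state="I am looking for events that may require keynote speakers.",
    future_state="I would like to be considered for events coming up in December 2025.",
    tried="Connecting on LinkedIn, but it hasn't worked well.",
    help_needed="Show me where else I can search for opportunities.",
)
print(prompt)
```

The point is not the code itself but the habit: answering all five questions before hitting Enter gives the model the context it would otherwise have to guess.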

S1- So while it's thinking, Karim, I want an opinion from you, because I've seen this on LinkedIn. You know, people post reactions from GPT on when they get mad at it, when they swear at it, when they're kind to it. What are the responses like? What have you found? I mean, you work with it a lot more than I do. I'm treating it like my marketing admin slash sidekick at this point. But what is your opinion about that?

S2- At the beginning, I actually wasn't sure if it was giving me accurate results, because as much as it's brilliant and it has this super incredible body of knowledge, it can hallucinate. AI is essentially, like I said, a mirror of us; it can have fallacies. So whatever results it gives back might look like incredible research, but if I actually read through it, there are certain things that it will assume and throw in there that might be false. So in the beginning I was like, oh my gosh, this is fantastic; I actually have a virtual employee that can help me with all these things, with cost savings and time savings. But then I started seeing that it was not only getting things wrong, it may start missing the context, because let's say you keep prompting back and forth, back and forth; you're getting a good rhythm with ChatGPT and so forth, but then it starts saying things that were already stated before. It's hitting what they call a token limit, right? The context window; it's falling behind where its memory can keep up. But I didn't realize there was such a thing as a context window a year and a half ago. So I'm going, why are you just not getting it, and starting to get mad and so forth. Then I realized, okay, its memory works differently. You have to be mindful of that.

S3- I wanted to offer a metaphor. AI also needs coffee breaks. Don't overflow the context window.

S2- Yes, yes. Absolutely. It will get tired, but it's not telling you that; the screen is scrolling very slowly, it's still searching, you refresh the page and you're going to lose your data, and then you come back and it says sorry, retry, and you retry and it's still not giving you that data. So to get to the point with your question, Mona: it's been good for me because I went through the trials, and for anyone else that has the patience, it'll be good. But for those that don't, it can be very tiresome, because in certain cases you put in a lot of time and you do want to swear at it. But interestingly, if you start swearing and getting mad, it starts tightening up its responses. It starts realizing, okay, you're cracking the whip, you're about to spank me on the screen, pardon the term; I'm going to start giving you very pointed and quick responses. So, for everyone, you may have seen this or not: if you show that you're mad and you're losing patience, it will actually perform better. I don't know if that's a good influence or a good message, but it's actually the truth. All right, so here we go; it gave a response. Let's open up the aperture a bit. There's a whole constellation of places where organizers look for speakers. So here we go: real places where keynote invitations are hiding, Eventbrite, Sessionize, PaperCall and so forth. And it says "the real power players." This is interesting; it's even going deeper into which are the ones that I should be targeting for this future goal that I have: Chamber of Commerce networks, the CTO organizations' call-for-speaker calendars. It's even giving the websites to go after, a podcasts-to-stage pipeline. So you can see that if you put more depth into what you're asking for, again: what you're doing or who you are, what situation you're currently in, what future situation you want to be in, what you have tried and where you need help.
If you could try to put that into most of your prompts, the depth of the responses will be a lot better and a lot closer to what you're looking for, hopefully most of the time; at least that's what I'm finding myself. But here's what's interesting. It's looking at events that were held in December 2024 and 2023, so it's exposing a bit of its training history, and looking to see if there are calls for whatever it might be for 2025 and so forth. So here it now starts adding comments based on what I put in my profile, saying, okay, you're positioned to go after these ones because of what you told me about yourself. And then at the end, a lot of the LLMs will do this, including ChatGPT: they'll ask, what do you want me to do next? And this is good because it at least gives a sense of how to continue the conversation; if you're not sure what to ask next, it'll give recommendations. All right. I think we're at about 20 minutes into this, so maybe something I want to try next is a transcription question here. Let's do one that's a little bit fun; let's talk about sports, because I'm a big sports fan. Okay, it'll probably ask me to allow the microphone. Okay: this is just a test prompt, and I can see that there are waveform icons in the prompt here; can you confirm you can hear me okay? I'll hit the check mark, and now it's working on transcribing what I just said. Yep, there it is; it picked it up. So if I hit send, it's probably going to say, yes, we can hear you okay. All right, so let's try this again. Let's talk about our beloved Toronto Blue Jays, and unfortunately they lost the World Series. What I would like you to provide for me: I am a big Toronto Blue Jays fan.
I have followed them going back to the mid 80s, when they were playing at Exhibition Stadium in Toronto, and even had the opportunity to see them play in the 1992 and 1993 World Series, which they won, obviously. In the most recent 2025 World Series against the Los Angeles Dodgers, we lost in game seven. It was very heavy. But one of the things I wanted to ask you, with me being an avid Blue Jays fan: the 2015 Toronto Blue Jays, which had Jose Bautista, they struck out a lot because they were trying to do uppercut swings and swing for the fences and hit home runs. They didn't really hit for contact and have a good batting average. The 2025 Blue Jays, managed by John Schneider, definitely changed their approach. They were more about contact, and I think they actually had a better overall batting average and were able to get people on base, and I think that's how they made it to the World Series. Can you give me your opinion on what were some of the key things they did on the offensive level to surprise the league, on how they did so well this MLB season? That was actually a pretty long question that I put in there, and I just went by transcription. So let me take a quick look. Yeah, it looks like it got all of it. So let's see what ChatGPT says; I was thinking I'll just show the prompt again. The ChatGPT transcription is actually pretty good from what I've seen. Now, of course, that's my standard; it might be different for others, but at least from my experience it probably gets it right, let's say, 90% of the time. If you're in a car and, let's say, you have Bluetooth speakers, it may miss a few words here and there. Okay: great stuff, thanks for sharing a passionate background. So it's even giving some acknowledgement of what I just shared; it's definitely trying to personalize the experience. What they did differently in 2025: it's even giving stats here, dramatically lower strikeout rate, better contact and so forth.
Aggressive on early pitches: this is true; this is how they started getting ahead of Shohei Ohtani in particular. It's showing the batting average, slugging percentage, balanced offensive production, situational hitting, smart use of the lineup, the contrast of 10 years ago compared to today. And now it starts going into, I would say, its own opinions. Basically this is where ChatGPT is getting a read or a sense of what I had put in there, and it's taking a chance in terms of what additional responses might resonate with me or what I'm looking for. So it's even giving its thoughts, saying, hey, what about this? What about that? And here are the top three takeaway points from that. But look at this: if you like, I can put together a slide deck of three to five slides with visuals of the shift in strikeout rate per year and so forth. So maybe the last thing I'll do, and then I'll open it up to Q&A, is have it create an image that best represents this conversation. You know what, instead of typing, let's use the microphone again. That was good; thank you for providing those insights. For the sake of the time that I have, let's do this: can you put together an image through DALL-E? And DALL-E is the name of the image generation model in ChatGPT. Can you have DALL-E give an image that best represents the Blue Jays and how they did in 2025? So I'm not even giving it the prompt for the image. I'm just saying, hey, just based on the nature of our conversation, give me something visual, and giving it a chance to create some media based on the context of the conversation. Now, if it's smart enough, it's going to switch from the chat-and-text response mode into creating the actual image. So let's see. It's really thinking here. You put technology on the spot and it always makes things a bit dramatic.
Anyways, while it's thinking about that, maybe the main takeaway that I want to share with everyone here and online is that there will be a natural comfort with typing in what you want. You may not want to talk all the time; maybe there's certain information you want to keep to yourself and not necessarily reveal to the AI. So typing is totally fine, but if you find that it's hard for you to type or describe what you're looking for, maybe just start practising and try out speaking to it. It tends to respond better, because it'll have that better context versus just a typed prompt, which is a lot more narrow, specific and requirements-driven. It is taking a while here. Come on. What? Maybe I will start a new chat. Let's do this. I'm just going to copy, and this is something you can actually do with prompts that you put in. Actually, I'll explain what I just did, because this chat is taking too long. I'm going to copy its response that had the detailed analysis of the Toronto Blue Jays, and I'm going to start a new chat. I'll say, hey, this was a response you gave in another chat. And I'm hitting Shift and Enter, and I'm going to type three dashes, because in an LLM, when you put the three dashes, you're separating the pasted material from the prompt that you're typing in. Shift, Enter. And what I copied in the other chat, I'm going to do a Ctrl+V and paste it in, just to give it context in terms of how it responded, because this is a new chat. So I'll say: this is a great response from the other chat. Can you please create an image that best represents the Toronto Blue Jays 2025 season? Now, before I hit Enter here: if you see this plus sign at the bottom, this is where it goes into specific features of ChatGPT that you can trigger. So I hit plus. You can add photos if you want, to help with what you're trying to achieve in the conversation. If you wanted to do deep research, you would select this.
If you wanted to go on the web and look for a whole bevy of different information resources, it's almost as if it's doing a thousand-times library search for the topic that you want to learn more about. But in this case I'm going to select Create an Image, so it's telling it that I want it to make an image this time. So let me hit that. So what I've done is started a new conversation, saying, hey, this is what you said in the other chat; you kind of froze there, so we'll try it again here and ask it to create an image. Let's see if it's going to allow me to do it here. Please like me here. Is this Murphy's Law? If it can happen, it will happen. Yeah, it's been responding to my other prompts, so. Yeah.
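The carry-over trick described above, pasting a previous response into a new chat between three-dash separators, amounts to assembling a prompt like this. The variable contents are placeholders; only the structure (framing line, separators, request) is from the session:

```python
# The response copied from the earlier chat (placeholder text here).
previous_response = "Detailed analysis of the 2025 Toronto Blue Jays offense..."

# Shift+Enter in the chat box inserts the newlines; the three dashes
# mark off the pasted material from the new request.
new_chat_prompt = (
    "This was a great response you gave in another chat:\n"
    "---\n"
    f"{previous_response}\n"
    "---\n"
    "Can you please create an image that best represents "
    "the Toronto Blue Jays 2025 season?"
)
print(new_chat_prompt)
```

The separators matter because the new chat has no memory of the old one; everything the model should know has to travel inside the prompt itself.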

S1- So while you're waiting, just give us a glimpse into your perspective on which AI tools work well for which scenarios: professional writing versus creative writing, pictures, videos, things like that. Almost like a tech stack for people that want to get started.

S2- Absolutely. I subscribe to ChatGPT, to Claude, to Perplexity and to Gemini. So I'm actually paying for all four of them, because I essentially have a multi-large-language-model process. I started off with ChatGPT at one point, I would say about a year and a half ago, because that was just the main one I was interested in, and that was for everything, whether personal or professional. Oh, it created the image. And oh, it was smart: it only put AL champions; it didn't say World Series champions. So there we go. It actually got the logo right, and it put the Blue Jays in almost the same jersey font. But that's smart: it put American League champions, so it nailed that pretty well. That's actually pretty cool. I'm going to maybe print that out and enjoy it. But to answer the question: ChatGPT is a good all-around tool for the prompt and response that we've seen here. It can do deep research if you want to understand more about a topic, and it can create images. If you wanted to create videos, there's a separate tool tied to ChatGPT, which is called Sora. Sora 2, of course, has gotten a lot of fanfare on social media. There's also Claude. Claude is really good with what I would describe as English expression; it just feels a lot more conversational. You type something into Claude and you can see that Anthropic, the company that makes Claude, tries to make it feel like you're really talking with a collaborator or a partner. So if you want something that feels more natural and conversational, I would go with Claude. Claude is apparently also very good at coding, for developers in particular. And you can imagine that if you transcribe something and feed it to ChatGPT, it's going to give you whatever analysis you ask for. With Claude,
I find that it's trying to pick up certain nuances in the transcript, and sometimes it'll ask you questions: okay, I see this speaker made this comment, that speaker made that comment; do you want me to give some additional analysis on what might be influencing that particular comment from them? Is there anything from our conversation history that I should factor in before giving you a response? So Claude will actually try to dig deeper. Perplexity is more of a web and digital search medium, and a lot of people may not realize that with Perplexity you can actually pick the model that it uses. It can be based on Gemini; it can be based on its native one, which I think is called Sonar; it can even use Claude Sonnet 4.5. So Perplexity is kind of the wrapper: they have an engine that's dedicated to web search, but you can actually pick which LLM it's based on. And then Gemini, by Google: I see it as the one if you really want to get into, not necessarily the deep research, but the longer conversational topics, because the context window governs how long a conversation you can sustain with an AI from start to end. Gemini has, I believe, either five or 10 times the context window size compared to ChatGPT and Claude, and I think Perplexity's is even smaller. This is important because if you know you're going to go into a discussion that's potentially going to have 100 or 200 questions and answers going back and forth, Gemini is really good for it. In terms of how I use the four: ChatGPT is good for ideation and brainstorming and just general questions; Claude is really good if you want a strategic and analytic perspective, and it's good for software developers, with coding via Claude Code; Perplexity is basically for web research; and Gemini is good for longer-context conversations. But here's something that you'll probably find interesting.
There was also an MIT study that suggested that for people that use AI versus those that don't, their coherency is less, because they're not thinking as much. They're not putting in as much critical thinking if they're using AI to answer things or write an essay for schoolwork, versus actually doing the effort themselves. What I've been doing is I'll start a conversation with ChatGPT. Then I'll copy the response and feed it into Claude: Claude, here's what I got from ChatGPT; here's what I think it means to me; tell me your opinion. And Claude will give me a response. Then I'll put that into Gemini and say, Gemini, your peers are saying this; this is what I take away from them; tell me what you think. You'll get some very interesting responses depending on whether you tell them where you're getting that response from or not. So you can actually get some very interesting context and feedback when going through that multi-model mode. And for myself, I find that I actually learn more. As much as I might fall into that easy trap of letting the AI do the work, if you get a response and you're willing to challenge it, or you become adversarial with it, you will actually see it try to tighten up its case, and not only come back and either agree or argue with you, but try to make the case and make it more direct.
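That multi-model round trip can be sketched as a three-step chain. Here `ask_model` is a hypothetical stand-in for whichever API call or copy-paste into a chat window you actually use; only the prompt structure (telling each model where the previous answer came from, with separators) reflects the workflow described in the session:

```python
def ask_model(model_name, prompt):
    """Placeholder for a real API call or a paste into that model's chat window."""
    return f"[{model_name}'s answer to: {prompt[:40]}...]"

# Step 1: get an initial answer from the first model.
first = ask_model("ChatGPT", "What were the key offensive changes for the 2025 Blue Jays?")

# Step 2: ask a second model to review it, saying where the answer came from.
second = ask_model("Claude", "Here's what I got from ChatGPT:\n---\n"
                             f"{first}\n---\n"
                             "Here's what I think it means to me. Tell me your opinion.")

# Step 3: ask a third model to weigh in on its peers' takes.
third = ask_model("Gemini", "Your peers are saying this:\n---\n"
                            f"{second}\n---\n"
                            "This is what I take away from them. Tell me what you think.")
print(third)
```

Whether you disclose the source model at each step is itself a variable worth experimenting with, as noted above.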

S1- That's interesting. The Loom Analytics team, for example, we use Claude Code, right? All of us are engineers, and to me, I disagree with the idea that you don't take the time to think, because you need to know the algorithm that you're going to ask the tool to develop on, right? You need to define the parameters of the problem. You need to spell out what you want it to solve for. Writing that solution in any language, for development, is like 10% of the problem, right? That's the reality of it. So recognizing what Claude is good for, even the AI tools, like, what is the purpose, right? Are they there as your collaborator, as your coach? I'm spending a lot of time with Claude right now learning the concepts of iOS, for example. It's allowing me to learn iOS, refine it, do all those things, but it's helping guide my understanding of simple business concepts. That doesn't mean that I'm not taking the time to learn it, right? But it's becoming an educational tool for me, and it's helping narrow my learning and search parameters to what I need, right? The system understands what my business needs are. It understands what our business models are, what our ICP is now from a business perspective. And so all of the responses that it's giving me for a certain business framework are tied into that, versus a more broad approach.

S2- Absolutely, absolutely. If there's anything I'll add to that, it's the importance of not only looking at a response but having your own opinion on whether you like what the AI gave you or not. When we use these tools, you're essentially the guardian, or, let's say, and I have a technical background, you're essentially the system architect of them; you're actually the architect of any particular AI tool that you're looking at. So the responsibility is on us at the roots; again, the tree will look nice and so forth, and it'll give you all this fabulous information. But I've seen this a lot more with ChatGPT 5, and maybe they've changed it with 5.1: unless you put in a prompt, and I've copied this in a number of times, because this can impact your reputation and level of trust and accountability with others, unless you tell the AI, don't make any assumptions, only go with data that you can verify from external sources, and if there's anything that you don't know, cannot confirm or cannot reach a certain confidence level of a conclusion on, you tell me, it won't flag it. I ask for that just to guard whatever it spits back. It might be a thousand-word response that looks incredibly detailed and well published, and it'll say, would you like me to put this in three slides for you so you can share it with whoever you're planning to? If that's garbage data, then have it tell you it is; if you don't ask for it, it won't. I don't want to say garbage in, garbage out, because even if you put good data in, you could get garbage out as well. And I think that's one of the scariest things right now, because that's not being covered, that's not being regulated; it's basically use at your own risk. The thought is, you have the privilege of using these AI tools. Did you have it a number of years ago? No. So you should just be happy with what you have. Don't fall into that satisfaction gap. Challenge it with what you believe should be true. And if you want to be stubborn, be stubborn.
Make it make its case on why it's true. And worst case, I'm showing ChatGPT here, and I've mentioned Claude, Perplexity and Gemini. It's not just Perplexity that can do a web or external search; any of these LLMs can do a web search. So if anyone has, let's say, a ChatGPT paid account, there are free accounts you can get with Claude, and with Perplexity as well, I believe. Just take any response from ChatGPT and feed it into another free tool and say, this is what I got, can you verify it for me? Just make sure that you're getting consistency. It'll either tell you yes or no, or it might even give you additional resources that help with your perspective. And sometimes those are the hidden gems. Yes.

S4- [question partially inaudible] ...either of those types of context, and then reuse them later. Can you touch on that, Karim?

S2- Yes. Yeah. No, that's fine, that's fine. What the gentleman was talking about initially is to imagine AI as a bunch of grad students who are brilliant; they're running off and trying to answer things for you. You have to act as the professor who gut-checks what they come back with. The question was tied to the context window and the advantage of using the memory setting in ChatGPT. This is actually a good question in terms of how it plays with context. There are, I believe, three elements with ChatGPT when it comes to memory. I'm actually trying to remember where the memory feature is here, but essentially what memory means is that overall in the app, it will decide, at random occasions in any chat that you start, to save certain data to its memory. That acts as adding to its brain for any future chat that you have. That's one thing. What I think may be a catch is that memory, I'm not sure if it propagates to, let's say, a custom GPT that you make as well; it might be exclusive. But anyway, you have the app-level memory. Another one, which I didn't touch on here but I'll mention briefly, is something called Projects. A project is where you can group a whole bunch of conversations into one particular portfolio, but you also have the power to give it custom instructions. So you're almost directing its behaviour, kind of like its base of operations. You can also add individual files to the project, whether it's, let's say, past chats, PDF files, Word files, OneNote, whatever it might be, so that it almost acts as its physical memory. You almost have your own database that you can always refer to. I say this because that app-level memory happens with any ChatGPT conversation window.
You also have Projects, where if, let's say, you have a particular theme of conversations you're going to have, say 10 different chats on one particular theme, let's say court reporting, and you want them all in one particular project, they can be kept there, and you can add additional files so that any time you start a new conversation in that project, it keeps the context of the files you put in and the specific instructions. And it should play a little bit with the overall application memory as well. The one limitation with Projects, and I'm not sure if they changed this in 5.1, is I'm finding that if I start a new chat in a particular project, it may not even know the chats from before in that project. Which is ridiculous to me, because then why would you start a project that's supposed to group conversations together? Imagine those grad students all putting articles into this project, but the AI isn't talking to each of them to tie it all together, so you never feel that context kicking in within a conversation. So what I've found I sometimes have to do is, for any chat that I have in a project, I'll actually put it into an external file and upload it as a new project file. Effectively, it has now picked up that conversation going forward. I find that if you put in PDFs, they might be one, two, or five megabytes; it will handle it, but all of a sudden it starts complaining that it's running out of space. If you save the chat to a text file, a markdown file, or a Word document at 50 KB, 10 KB, or 5 KB, it can parse through it more quickly. Because you're giving it smaller files, it can index them like a Rolodex or a library and quickly sift through them. So to answer your question: for memory, you have the app settings, controlled within the settings section here in the bottom left.
Then you have Projects, where it tries to group conversations together. But a good practice, I find, is when you start hitting the context limit in a conversation, and it starts responding more slowly and takes a while to give a response back, just copy that conversation into an external file and add it as a new project file. Now all of a sudden all those conversations are in there. It's manual work, but until ChatGPT can do that automatically, at least it's a way to help with memory management. I hope that answers your question. Yes.

S3- I did want to chime in to offer an additional piece. Karim mentioned that ChatGPT specifically started to behave differently. I can attest to that: literally yesterday I was pushing it to provide information which a month ago it would have provided through a search, and it could not find information from other chats. We have quite an intimate relationship, so I pushed it and asked why, and apparently they implemented new security limits which do not allow it to share information between chats, for the protection of the users, actually. So this is a very smart hack which Karim just shared. Use it.

S2- That makes sense; that's why they would do that. Good point. Let's see if there are any other questions. Oh, the last one, okay.

S1- So, a funny comment about the AI prompting piece. I've been making an attempt, and I'm not good at being disciplined about it, but I'm in the process of training an AI project where essentially my team, my external marketing team, and my internal team can just ask it questions instead of coming to me. So there I am writing documents about, you know, the stuff that is just sitting in my head, and it's my way to actually do knowledge management.

S2- Yeah.

S1- Right. Because now they don't keep coming back to me over and over again, even if somebody leaves the team. And it's interesting, I was reading on LinkedIn that there are companies now coming up that allow people to legitimately make digital twins of themselves for this very purpose. Right? Enterprise-grade knowledge management is happening because they're saying, well, when your team members are away, when they leave, when they're out sick, this is a way for the people still there to get their questions answered. So it's very interesting, because you can take this to the next level and use that concept of projects to store knowledge that the other team members can query.

S2- Absolutely. Just to add to that: in the keynote earlier, I was saying that my family gets confused when I'm having random conversations with Claude and ChatGPT. The reason I'm doing that is I found that instead of using OneNote or whatever note application on my phone, when I'm walking the dog outside and there's one of those random thoughts, whether you wake up in the morning or as you go to sleep, you're thinking, I love this idea, this could be a eureka moment, I need to write this down somewhere. Instead of typing it into OneNote, I'm actually quite comfortable going into ChatGPT or Claude, hitting the microphone, and while I'm walking the dog with one hand, just talking: you know what, mark this conversation as a future note for knowledge management, or for the knowledge database, an idea to consider for whatever purpose or need, blah blah blah. And then hit stop and just have it recorded. I find that acts as a digital footprint of thought notes that I didn't want to forget. I'm now thinking that what can be done on a personal level applies at enterprises too. You hear about the water-cooler chats, or as soon as you leave a meeting room, how do you capture those conversations with the key folks in the meeting, whoever contributes the most or the least, and whatever thoughts they have? Maybe they're not comfortable sharing things in a meeting and would rather have something back at their cubicle. If we can convince them: don't be afraid to speak it into your phone and have it recorded there. It doesn't have to go to an AI; just have a way of capturing it and transcribing it on your phone. It can do that service for you, and then you can decide if you want to put it into ChatGPT or Claude for further analysis, and group it with other project conversations to see if it can help connect the dots with whatever your goals are.
So definitely don't be afraid to kind of use it as your digital audio note taker, because some of the things that it will pick up in terms of key insights, it could be impressive and helpful too.

S1- Yeah. As part of one of the incubators that I work with, one of the speakers at one of those sessions brought up Granola. It's a tool that you can talk to; it can continue to capture what you're working on, all the audio playback and so on. It will keep the original transcript, but it will also turn it into notes. It's very new for me, but it's interesting because it allows me to go back and say, I forgot this, let me go back and look at it, versus trying to compress it all, figure it all out, play it back in my head, and remember where I made notes. This is literally my personal assistant at all times.

S2- Cool. Are there any other questions?

S1- Anybody have any questions online? Any specific tasks you're trying to solve with prompts that Karim can show us? All right. Anybody in the room? Oh, here, there's a question. Something's shown up. Oh, it was just feedback. Okay. Thank you, Karim.

S2- Yeah. And if anyone has questions afterwards. Absolutely. You can always reach out to me. I'm here to help. Like I said, I'm your champion. I want you to be a champion, too.

S1- Thank you very much, Karim.

Meet the speaker

Karim Gallimore

AI Conductor

Karim Gallimore is the AI strategist who asks the questions that make boardrooms uncomfortable:

Why are we investing billions in AI while slashing the training budgets for the people who'll actually use it? Why are we surprised when transformation fails?

As founder of Your AI Backpack and a Program Manager at AMD, Karim specializes in the messy reality of AI implementation that nobody wants to talk about... The human cost of getting it wrong.

He is a graduate in engineering physics, a Certified Business Coach, a PROSCI Change Management Practitioner, a Mindvalley AI Master, and an award-winning AI Artist, giving him the technical credibility to understand what AI can do. His years leading organizational transformation give him the scars to see what it actually does to people.

Karim's work emerges from a simple conviction: AI without people is just expensive automation heading for failure. Through 'lead-by-example' education and strategic consulting, he helps leaders build transformations that don't topple over the moment things get difficult. Because they invested in the foundation and not just the visible growth at the top.