Build a Python Application with Mantium and the Power of Natural Language Processing by Svitlana Glibova


Video Transcription

Great. Well, it is 2:31 my time, so we'll go ahead and get moving. Thank you so much for joining me for this little workshop. This is my first time presenting at Women Tech and I'm really excited to be here, so thanks again. We will be focusing on building a Python application with Mantium and natural language processing. As far as an agenda goes, we'll go through a brief introduction and the biggest aerial view we can get of natural language processing, or NLP; we'll talk about some of the features of Mantium and what you can do with our platform; and we'll talk about building a small Python application, and maybe go a little bit further if there is time and interest.

So here is what you should expect to learn, and I'd like to preface this by saying that my goal here is for you to have something to take with you and to learn from. If you feel like you can't follow along with the code and the repo, or don't want to, that's totally fine: take a look at it in your own time and at your own pace, and whatever you feel you're able to get from it is absolutely fine. Hopefully, what you'll walk away with is building a language processing prompt without having to use code (we'll be doing this in the Mantium UI); why organizing a coding project is important, and how to keep yourself from looking back at your project six months down the line and wondering how it got here (so we'll talk about a few tools you can use for organizing Python dependencies and that sort of thing); and executing that prompt we created with no code, which we'll actually be using Python to do.

And then, if there is time, like I said, we can take this a little bit further. So first things first, an introduction. My name is Svitlana, and I'm a software engineer at Mantium, a company that was started at the beginning of last year, so we are super fresh. I have been here for about 10 months, and this is actually my first career in the tech industry. I come from working in hospitality for about 10 years; prior to leaving the industry, I was a sommelier and a beverage professional. At some point last year, it dawned on me that I needed to make a big change, so I went on to do a data science boot camp and later joined Mantium as a junior developer. I was recently promoted to software engineer. I do a little bit of developer relations work here and there, and I do some NLP work as well. So if you have any questions at all, feel free to reach out: get in contact on LinkedIn or send me an email, and I would be more than happy to chat. So first things first, let's talk about natural language processing and why it's important to the tech industry today.

So, like I said, a big top-down overview: machine learning is a subfield of artificial intelligence that allows computers to learn rules and patterns without having them specifically defined. The general process is that computers extract patterns from raw data and create, essentially, equations between points of data that model their relationships to each other. Natural language processing is sort of a subfield of this, but it also includes computer science and linguistics. It is the development of computational systems that can analyze, process, and predict natural language sequences, and by natural language sequences I mean speech and text. Because a lot of these terms are used pretty interchangeably, I think this is a pretty good Venn diagram explaining how these different systems overlap. We're in this circle of NLP, but obviously there is overlap here with deep learning and transformers, and with traditional machine learning systems working on annotated text; then, of course, AI encompasses all of this. Large language models are fairly recent arrivals on the scene. Transformer models essentially enable computers to process text without the text being manually annotated first, so we are essentially allowing computers to detect patterns between sequences of text without giving them any direction.

Something I think is incredibly important to consider when we think about NLP is that, as technology professionals, we are responsible for designing the systems that process language, and language is such a crucial element of the human experience. It has so much more impact than just equations.

It can impact people's day to day: real people and real lives can be changed by the way that language is processed. As an older example of how artificial intelligence can steer in the wrong direction, we can take a look at this example of a gender-neutral language, Hungarian, being translated into English. You can see that there are assumptions being made about gender, activity, and occupation: while this language does not account for gender in activity or occupation, the translation to English is already making assumptions.

What I think is incredibly important to remember here is that language processing is such a multidisciplinary field that we need experience from all sorts of people: from people like me, from people like you, from people with different social backgrounds and lived experiences.

This is more than just programming computers. It has a tangible impact on the lives of other people, and we need opinions from everyone in order to make these systems more inclusive and more accessible. The mission of Mantium, tying into all of this, is to democratize access to artificial intelligence: to enable people to use artificial intelligence and natural language processing without having to overcome the hurdles of algorithms, and to create artificial intelligence systems for their own personal use,

lowering that barrier. So, to talk about a few things that are possible with Mantium: it is an interface that allows you to quickly prototype and iterate on natural language processing prompts, which we'll talk about shortly, and do so in minutes with a web UI. You're able to add security features to control both the input and the output of these language processing prompts, and once you've configured a prompt, you can easily share it for other people to test and try out. We're going to get started with that first. I do want to give a little break for people to register and, if you want, to get an API key with OpenAI; that is absolutely not necessary, but I welcome you to do so now. The first thing to do is sign up with Mantium, and all you have to do is click Sign Up. It's a pretty quick process, and there's absolutely no pressure if you don't have the time. You are able to use one of our large language models if you would like to stay wholly within the Mantium platform; if you would like to use OpenAI's GPT-3 models, what I would recommend is visiting openai.com.

There, you can either log in or register using the login link at the bottom. So I am going to mute for a few minutes; if anyone wants to give me a thumbs up when they're ready, we'll go ahead and get started with a Mantium prompt. We'll give it about two minutes, and I'll be right back. Awesome, I see a couple of thumbs up, so we're going to keep moving. If you have any questions, feel free to stop me or say something in the chat and I will try to address it as quickly as possible. Once you have created a Mantium account and logged in, the first thing you'll see is this dashboard view. It will become more interesting as time goes on, as you create prompts and have some activity on them. If you do have an OpenAI account, after you log in you'll see an icon for your personal profile in the top right corner, where you'll be able to get an OpenAI API key. If you would like to paste that into your Mantium application, you can do so; if not, we can just continue with our own language model.

In case you want to add an API key, you can do so here under AI Providers. As you can see, we're compatible with a few different providers of language models, specifically OpenAI, Cohere, and AI21 for the time being. All you would do is copy your OpenAI key from that profile and paste it here, where you can add your API key. Obviously, I already have one configured, so my view is a little bit different. Either way, I will give you all a couple of prompts to play around with. I created a couple of different language processing prompts, and we'll talk about how these work shortly. We have here a recipe creator, and I've made a few that are kind of silly, just for the sake of interacting with them in a fun way. This is a share link for one of the deployed prompts I've made, and what this prompt does is construct a recipe based off of text that you give it. There is no recipe database; this is just a language processing prompt, and it takes patterns that you show the language model beforehand, and then you can introduce new text to it. So if we're creating a new recipe, maybe you say that today in my fridge I have some corn tortillas and some salsa verde.

I found some shredded cheese, and maybe there are some chicken thighs in my fridge, so we can click Execute. What this OpenAI Davinci model is going to do, based off of patterns it has seen before, is create a recipe for us. I'll show you the pattern I introduced to it shortly, but as you can see here, it has given us a recipe name and some directions. Sometimes the directions are a little silly and sometimes they make a lot of sense; the better and more accurate the information you've introduced to this model, the better your results will be. What we can do now, back in the dashboard view, is click on AI Manager, and you'll see Prompts here. This will be a menu of all the prompts you've created; as you can see, I've made quite a few. I am a fan of recipes and poems, but there are obviously business use cases for language processing as well. What I'm going to do is open one of the prompts I've created in the past and walk you through the different steps of configuring a prompt.

You'll click Add New Prompt and you'll see this Add New AI screen. What I will do is open up one of my existing prompts and click Configure, so we'll have some prefilled information. If you don't have any text that you would like to use with this prompt, I will send a link right here, and there will be some prepared recipes for you to paste into this prompt. Going through this, a couple of things: obviously, you'll want to name your prompt.

I typically include in the description what provider and what model I'm using, so I can take a look at it and reference it in the future. Then you'll choose a provider; I've chosen Mantium, and I've chosen an endpoint of Completion. What a completion endpoint is going to do is generate text based off of patterns that it has seen before, and I'll show you how this works with prompt text. We're essentially creating a pattern each time we introduce a recipe here. What you see is a prompt to create a recipe with the following ingredients: we have a list, we have a blank line, then we have the title of the recipe, and then we have directions followed by a colon, followed by six pound signs, which is also known as a stop sequence. Once again, you'll see the same pattern: create a recipe with the following ingredients, here's a list, here's a space, here's the title, here are directions and a colon. And once again, you'll see the same thing. What we want to do is essentially take turns completing the pattern: if I write the six pound signs and then "create a recipe with the following ingredients," what the computer would anticipate seeing next is a list.

After that, it would expect a space, a title, and directions. So we'll add a stop sequence here, and once the model reaches the stop sequence, it's going to stop generating text. We have the six pound signs, and down here in Input we can give it a shot. We'll try it out with the ingredients I pasted before: create a recipe with the following ingredients, the same ones we saw in the shared prompt. When we click Test Run, it's going to complete the pattern, and the better your patterns are, the easier they are for a large language model to follow. What you can see here is that all of this input text displays; you'll see no spaces whatsoever, but you will see a name for a recipe and a set of directions, and then we stop. Sometimes it over-generates: we'll get another recipe and another set of directions, and then we'll stop. What you can do here for quick iteration is play with the inputs and outputs and see what works best for you.
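The few-shot pattern and stop-sequence idea above can be sketched in plain Python. This is only an illustration of the structure, not Mantium's internals; the exact stop string, example recipe, and prompt wording are placeholders:

```python
# A few-shot prompt repeats the same pattern, ending each example with a
# stop sequence so the model learns where one completion ends.
STOP = "######"  # illustrative stop sequence; use whatever your prompt defines

EXAMPLES = [
    ("corn tortillas, salsa verde, shredded cheese",
     "Cheesy Enchiladas",
     "Fill the tortillas with cheese, cover with salsa verde, and bake."),
]

def build_prompt(new_ingredients: str) -> str:
    """Assemble the few-shot prompt, leaving the final pattern incomplete."""
    parts = []
    for ingredients, title, directions in EXAMPLES:
        parts.append(
            f"Create a recipe with the following ingredients: {ingredients}\n\n"
            f"{title}\nDirections: {directions}\n{STOP}"
        )
    # The model is asked to complete this last, open-ended repetition.
    parts.append(f"Create a recipe with the following ingredients: {new_ingredients}\n")
    return "\n".join(parts)

def truncate_at_stop(completion: str) -> str:
    """Models sometimes over-generate; cut the output at the stop sequence."""
    return completion.split(STOP, 1)[0].strip()
```

The web UI handles all of this for you; the sketch just makes explicit why a consistent pattern and a stop sequence give you clean, bounded completions.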

You can see that this blank line tends to work best, because there is a line break between the colon here and the beginning of the list of recipe ingredients. After you have iterated upon and tested your prompt and you're getting outputs that make sense to you, you can Save, or you can Deploy. When you deploy, you'll see a few settings for simple things like your prompt name and description; we have an author name, and then we also have ways that you can deploy this prompt. With a public prompt that is live, you are able to share it with anyone: all you have to do is send them the link this creates. When we click Deploy, it opens another window, and you now have a URL that you can share with others so they can do the same thing. Hopefully that all made sense; if you have any questions, feel free to ask. For the next step, I'm going to take a look at the time. Perfect, we have about 20 minutes.

So we'll go through a kind of introductory level of using something like a prompt in a piece of Python code. As you can see here in this GitHub repo that I shared, I created an entire application. We're not going to worry at all about the front end, but I did make a really simple React front end that you can use to interact with the Python back end. I will forewarn you: I am not a React developer. I worked on this for about a week, and there are still some bugs that could be worked out, but it was a fun learning exercise. In this repo, in the Mantium back end, there are going to be a few different files and a few different folders. Something I'd like to talk about, as a person who is relatively new to the tech industry, is how intimidating it was to walk into a code base, look at files I'd never seen before, and think, what on earth are we supposed to do? What I realized over time, through trial and error, is that there are some really important organizational things you can do before you start writing Python code. In this repo, I used a library called Python Poetry.

It enables you to control and manage your virtual environments, which are the environments in which you develop and deploy your Python code while leveraging different libraries. What Poetry enables you to do is quickly add libraries to both your production and your development environment, and describe them in a pyproject.toml (TOML) file. As you can see here for this particular application, there are dependencies for production, which you'll see in lines eight through 13, and there are dependencies for development. When you're testing your code, you're going to want to do things such as linting, which will standardize your Python code; yes, standardize is a good way to describe that. In your Makefile, you can construct scripts that automatically run these different commands: you can set up commands for running tests with certain flags, and you can run linting, formatting, whatever you need to do. This just helps you organize and automate your development process, keeping things a little less all over the place.
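A minimal pyproject.toml along these lines might look like the following. The library names and version pins here are illustrative, not the exact contents of the workshop repo:

```toml
[tool.poetry]
name = "mantium-backend"
version = "0.1.0"
description = "Demo back end for the Mantium recipe prompt"

[tool.poetry.dependencies]        # production dependencies
python = "^3.9"
fastapi = "^0.78"
uvicorn = "^0.17"
python-dotenv = "^0.20"

[tool.poetry.dev-dependencies]    # development-only tooling
pytest = "^7.1"
black = "^22.3"
flake8 = "^4.0"
```

Splitting production and development dependencies this way means a deployment installs only what the app actually needs, while `poetry install` on your laptop pulls in the test and lint tooling too.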

Instead of having to remember and hunt down all of your different testing and linting libraries, they are just right there for you. I've also created a run shell script; a shell script effectively executes commands in your command line. What this one does is start a Uvicorn server on which you can deploy your back end code, actually see what's going on, and interact with different clients as well. Inside this app repo, you'll see a few different Python files, and we're only going to focus on two of them for the time being: a load_env.py file and a run_prompt.py file. What you will need first is to configure some environment variables that you can use to access Mantium. With the Mantium Python client, you'll need your username and your password, and I created a demo file that you can replace with your own values here. As I mention here, please don't use unencrypted environment variables in any kind of production, and please don't share them in publicly shared repos; this is just for demoing purposes. If you do want to run this code locally on your machine, you would just replace these strings with the proper values and prompt ID.
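For reference, a demo .env file along these lines might look like the following; the variable names are illustrative placeholders, not necessarily the exact names in the repo's demo file:

```shell
# .env — demo values only; replace locally and never commit real credentials
MANTIUM_USERNAME="you@example.com"
MANTIUM_PASSWORD="replace-me"
PROMPT_ID="replace-with-your-prompt-id"
```

Keeping a file like this out of version control (for example, via .gitignore) is what lets you share the repo publicly without sharing your credentials.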

You can find the prompt ID right here, at the very end of this URL. This is a unique ID that ties your prompt to its identity, and it's how you access prompts from different applications. Back in this back end code, inside of the application, we have load_env.py. What we're going to do is use a Python library called python-dotenv to load those environment variables so that we can execute this code. Because we have that .env file, we are now going to load the username, password, and prompt ID and return them to run_prompt.py; as you can see here, we've imported load_env, and we're going to get our username, password, and prompt ID.

Then we are going to get a bearer authorization token, essentially performing what you did when logging in with your username and password. We'll then retrieve the prompt with that ID, and now this object holds that prompt. What we're going to do after that is run this prompt with input text. Input text is going to be the variable that interacts with prompt_results, and what this does is, just like when you clicked that Execute button in the shared app, essentially the same thing. We make sure we're not getting an empty result, and then we return that string back to our application. You'll see that inside of main.py we have a few things going on, but the thing I wanted to point out specifically (excuse me, I have dogs) is that we've created an API endpoint. With this API endpoint, we take a string of values and run that prompt_results function I just showed you with the ingredients input string. So we execute the prompt with an ingredient string, print it just to make sure we're all good, and then return it as output.
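The run_prompt.py flow just described (log in for a bearer token, execute the prompt, guard against an empty result) can be sketched roughly as follows. To keep this self-contained I'm using the standard library rather than the Mantium client, and the base URL, endpoint paths, and JSON field names are all assumptions, so treat this as an outline of the flow, not the client's real API:

```python
import json
import urllib.request

API_BASE = "https://api.mantiumai.com"  # assumed base URL, for illustration

def _post(url, payload, token=""):
    """Tiny JSON-over-HTTP POST helper."""
    headers = {"Content-Type": "application/json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(), headers=headers
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def extract_output(result):
    """Guard against an empty result before handing the string back."""
    output = (result or {}).get("output", "")
    if not output.strip():
        raise ValueError("prompt returned an empty result")
    return output.strip()

def run_prompt(username, password, prompt_id, input_text):
    # 1. Log in, just as you did in the browser, to get a bearer token.
    token = _post(f"{API_BASE}/v1/auth/login",
                  {"username": username, "password": password})["token"]
    # 2. Execute the prompt against our input text, then guard the result.
    result = _post(f"{API_BASE}/v1/prompt/{prompt_id}/execute",
                   {"input": input_text}, token)
    return extract_output(result)
```

The actual Mantium Python client wraps the authentication and prompt calls for you; the point of the sketch is just the shape of the flow: authenticate once, execute with the prompt ID, and validate before returning.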

Since we have a few minutes, what I'm going to do is start up my servers locally so that you can see what happens, and you'll get to see this beautiful React application that I built. As you can see here, I have a .env file; I'm not going to show it to you, because these are my credentials. We are going to start up the back end server, start up the front end, and watch them interact with each other. Here I am in my command line, currently in the working directory of the Women Tech Global Mantium back end. When you are in this directory, you have direct access to your Poetry files and your run shell script. Because I have this shell script, we can run run.sh, and that starts up my back end; you can see here that we've started it on localhost port 8000. In this other terminal, I am in the Mantium front end directory, which is right here and contains JavaScript code, so we'll be using the Node package manager to start up the front end. Let's see. Oh, something has changed. Yeah, of course: live demos.

Things change; one moment, I'll be right back. OK. Yeah. Right. Thanks, Jack. We're reinstalling some Node dependencies, so hopefully this will run; if not, it's totally OK, because you'll be able to run this locally yourselves. But so it goes; hopefully you all have found this useful so far. We're going to wait and see if these Node dependencies reinstall; they really can take their sweet time. I'll reshare my screen so we can witness this together. Hopefully my computer doesn't sound like it's taking off like an airplane, but we'll see. OK, let's see if this works this time. Ah, here we go, beautiful. Now we are back to localhost port 3000. Like I said, I am not a React developer; I thought it would be a fun exercise to try to build a front end, since I mostly do back end work, and let me tell you, it was an adventure for sure. Hopefully we've got our back end running as well. I use a package called FastAPI for Python, which is a really great way to create RESTful applications, and you can create routes for performing different tasks. What will happen is that this front end will communicate with the back end.

Just so you know, there's no database here: this is all hard-coded into a Python file that we can see in the back end. I have an ingredients list; I just created a list of dictionaries that this front end fetches from the back end. As you can see here, we can add ingredients, but I already have a few things loaded in, as I recently had an anchovy pasta that I was particularly fond of. What's going to happen is that when I click Give Me a Recipe, we'll be calling, through Mantium, that prompt I showed you earlier. So when we click Give Me a Recipe, hopefully it will actually give us a recipe. And here it is: we have run the prompt, and just as we saw in the prompt I showed earlier, we now have a recipe for spaghetti with anchovies and butter. This is very not-beautiful React, but it does the thing, and it is at least mostly functional code for you all to take with you

and program with however you see fit. I would love it if, when you have the time, you play around with it, make some changes, and see what you come up with. I think we're nearing time, so I am probably just going to start wrapping things up. Do you all have any questions, feedback, or anything you might find useful to take into your programming journeys? Actually, yeah, I saw that meme today. I was chatting with one of our front end developers this morning, and I was like, oh, node_modules, this is literally the heaviest thing I've ever seen in my entire life. So I'm glad that it worked out. Sweet. I hope that you learned something, felt like this was worth your time, and were able to get some useful information out of it. I will mention once again that Mantium is the first tech company I've ever worked at, and it's been absolutely delightful. I have learned so much, and I feel like we're making a meaningful impact. Please give us a shot: join, try out some prompts, construct your own. If you have any feedback, in the README I made for this project I have a couple of links to our Discord server as well as our developer docs.

We are totally here for feedback and improvement, and I'm really looking forward to seeing where this product and company go in the future. So thank you so much, Irina; I really appreciate you joining, and I was really happy to do this. Yeah, add me on LinkedIn or send me an email; I will be around. Awesome. Yeah, Carrie, welcome to Python; if you have any questions, feel free to shoot me a message, and I'm happy to help. There are so many things I've learned in the last 10 months that I don't think I could even put them all down on one piece of paper, but it definitely starts to make more sense as you keep going. I would definitely encourage you to create setups that set you up for success in the future, so whatever you can do to automate your Python processes, I highly recommend it. Thank you so much. Thanks, y'all; enjoy the rest of your conference. And hopefully, Helen, this was informative for you too. Enjoy the rest of your conference, thank you for joining me, and hopefully we'll be in touch.