Jinjin Zhao - The interdisciplinary nature (mystery) of human learning and how technologies (supernatural powers) can help understand & improve how humans learn


Video Transcription

Hi, everyone. Welcome to my session. Nice to meet you all in the chat, and please feel free to use the chat to leave any questions. I want to make this fun and useful to you, so please help me by letting me know what you think. If anything is confusing, let me know where and when, and I'd love to address it at the end of this session; I'll make sure we have enough time to discuss. OK, let's dive in. The topic for today is the interdisciplinary nature of human learning and techniques for understanding and improving how humans learn. I am Jinjin, a senior applied scientist in machine learning at Amazon. As we all know, human learning, and improving human learning, is a huge topic.

We start learning as babies and we continue learning throughout our lives, and this is how human progress evolves generation by generation. However, even though we are experiencing it every day and every second, we don't really think much about it or learn much about it.

Of course, this session is an exception: we will focus on thinking and learning about human learning. But the truth is that we are not yet able to come up with a comprehensive understanding of what human learning is and what we can do to improve it. There's no doubt that understanding human learning is important, because what we learn and how we learn constitute who we are, what we want, and how fulfilling our lives are. Learning is also complicated, for four reasons. Sorry, I didn't put them on a slide, but I will briefly talk through them. The first is that there is no ground truth: no ground truth of how each of us can learn in the most effective way. We learn by doing, by failing, by reflecting, and sometimes through others' storytelling. In short, we learn through observations of ourselves and others, and we have no direct access to the covert mental process in the brain, which we all know is the most mysterious organ researchers have studied. That is the first aspect: there is no ground truth. The second is that learning is different for each of us, since everyone has a personalized model of learning and a unique view of what learning is. The third aspect is that the education system has treated us all the same, throwing at us the knowledge it thinks we should develop.

When we were kids, it didn't consider what I need or how and when I want to learn it. Sometimes the way we were trained creates difficulty when we want to learn something new, because we have to unlearn things first. The fourth aspect is that metacognitive skills were not built, but should have been developed when we were kids. The metacognitive skill is the skill of learning how to learn: learning how I should learn. Studies have shown that kids who develop such a skill when they are 4 to 8 years old outperform in their studies and careers 10 to 15 years later, yet there is little teaching of metacognitive skills despite how much we need them. So briefly: no ground truth, learning is that diverse, we need to unlearn things first, and we missed a great opportunity to develop metacognitive skills. That is what makes human learning interesting, and today we are going to make an effort to discuss it. I will be talking about the research work I have been doing over the past two years on human learning, contextualized in corporate upskilling. I decoupled this huge topic into subtopics, and I'll share some findings around how we can use machine learning, data science, cognitive science, and learning science to help understand what learning is and how we can improve it.

All right, let's step back and think about what learning is. What comes to your mind when you think about human learning? Here's what I think of: reading in the library, working on an assignment, discussing with groups in a classroom, learning online, or learning through all kinds of experiences you come across. Today we'll be focusing on online learning, through which learning experiences can be delivered to individuals at scale with curated design. OK, so what comes to your mind when you think about online learning? This is what I think of: first you select a platform, maybe Coursera or online classrooms from universities, and then pick a course you're interested in: NLP, computer vision, deep learning, CS231 at Stanford. You follow whatever is presented to you, read the content, answer the embedded questions and assessments session after session, and take the final assignment before you get some form of certificate. So here's my question: how comfortable are you in such a learning experience? How much do you like it, and what do you think could be improved for you? What I am questioning is what this certificate means, right?

What does a degree mean? How much knowledge have I learned from this experience that I can apply directly to my projects? After I have learned these materials, what is the gap between them and delivering an end-to-end NLP project? Should I spend more time on those assignments? Is that helping me deliver the end-to-end project I'm going to work on in my job? If not, what are the alternatives worth my energy and focus? In short, we as humans always want to be validated, and validated in the right way: validated that my skills work for my projects. We not only want to be validated, we also want to be understood, in general and in this learning context: understood by the tutoring system, by the algorithms behind the screen. Is that feedback for me, not for everyone, because I'm different? Those are just sample questions I have, and that we all should have, towards an online learning system or agent. So today we will be discussing the state-of-the-art models, frameworks, and techniques that can help us deliver more effective and efficient learning experiences to our learners: an experience that understands what the learner needs and provides good practice opportunities, so that as learners we can not only develop the desired skill but also gradually develop that metacognitive skill.

So far we have focused on learners only, but in the ecosystem we also have players like learning designers, learning scientists, and practitioners. Let's also think about each player's interest. As a learner, as we have talked about: I want to learn, I want to know where I am, what the path forward is for things yet to learn, and how I can demonstrate the skill I have developed to feel good about myself. As a designer who creates learning experiences: I want to discover what is working and what is not for my audience of learners in a specific learning experience, how I can improve the design for those developing the skills, and how to better structure the knowledge so that I can easily observe where learners get stuck and provide feedback or guidance to help them out.

As a researcher or scientist: I want to discover which insights help learners understand where they are, how to proceed, and how to demonstrate their skills; which insights help designers understand how effective the learning experiences are and where and how to improve them; and also the findings that will augment learning science theory and practice to find the truth of learning.

So here's the list of subtopics to get you into the story, and after your mind is filled with those question marks, I will stop, hoping that you will start thinking about learning every time you learn. The first topic is the student model of learning. What is a student model? A student model is a model that estimates where the student is in learning something, where "where" refers to the knowledge state: for instance, 100% mastery of something, or 60% mastery of a subject. Why do we need this estimation? Knowing where we are is the first step in deciding how to proceed, and by having access to where we are at each step, we can gradually develop a sense of how we learn, also known as the metacognitive skill. But knowledge tracing is not easy, because learning is a covert mental exercise. Still, there should be ways to observe how the brain is functioning, right? And there are different ways to observe it: brain imaging is for sure one solution, attaching sensors to your body to collect biological signals in addition to brain imaging is another, and there's cutting-edge research around those notions.

For today, we're going to talk about using quizzes, formative assessments, to estimate the knowledge state: for instance, using multiple-choice, single-choice, or matching quizzes, or even an open-ended constructed response question, to understand and evaluate whether you're good to move on.

In order to ensure the system can make a confident and accurate estimation, usually as a learner you will be given a minimal number of exercises to demonstrate that you're not guessing or cheating, right? That minimum number of exercises is decided by the complexity of the skill. For each exercise, you can take as many attempts as you need to practice, and you learn from those attempts because feedback is provided when you fail an attempt on a particular question. After you have practiced all those exercises, the system has collected your sequential behavior, and some mathematical model takes that sequential input to estimate your knowledge state. That model is called the knowledge tracing model. Knowledge tracing is a task that looks at the sequential data and estimates where you are. But based on what? We don't have access to the ground truth, so defining what the proxy of the truth is, and justifying that proxy, is the first subtask in conducting such a knowledge tracing task. Sadly, there isn't much work around how to find a better proxy.

After defining such a proxy of truth, making the algorithm accurate, reliable, and interpretable is the second set of subtasks. Here, I've made some efforts in building a student model. Things will get a little bit technical, so type in any questions if you find it confusing. OK. In the first work, I tried to define the truth proxy as the average correctness of the successive N attempts. Let's say you failed the current attempt on the current assessment, but you are able to consistently succeed in the following five attempts on the successive assessments. That indicates that even though you failed the current attempt, you learned something from it, which is why you are able to solve the successive problems. Your current knowledge state should then be estimated as, let's say, 90% close to proficient, assuming five is the sensible amount of evidence you need to demonstrate for that skill. What people usually do is use only the next attempt's correctness as the truth, and I'm not comfortable with that, because we need a consistent signal to estimate confidently for those skills. That's why I propose looking forward over the successive N attempts.

The N needs to be defined by the course designer based on the type of skill and the complexity of the knowledge. In this figure, the correctness of all the red attempts is averaged as the proxy, just to illustrate the idea. As a result, with a few steps of forward looking, we can observe a clear trajectory of both the actual and the estimated knowledge state, and the accuracy of the estimation is very good. The blue line is the estimate, and the yellow line is the actual, which is the proxy we defined at the beginning. In comparison, if we only use the next attempt's correctness as the proxy, almost no pattern can be observed: each answer is either wrong or right, it's binary, and the actual line just jumps up and down. How can you expect an algorithm to predict accurately and confidently when you feed it such randomness? That's why the model I propose, the blue line, is the better path forward. But there is still an unanswered question: should we average the correctness, or should we use some statistical model such as regression to better approximate what is happening in the brain, and which makes more sense?
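As a rough sketch of that forward-looking proxy, the snippet below labels each attempt with the average correctness of the N attempts that follow it; the window size and the attempt sequence are illustrative assumptions, not values from the talk.

```python
def forward_proxy(attempts, n=5):
    """Label each attempt's knowledge state as the average correctness
    of the successive n attempts (the forward-looking proxy), instead
    of only the single next attempt's correctness."""
    labels = []
    for i in range(len(attempts)):
        window = attempts[i + 1 : i + 1 + n]
        if window:  # average correctness over the following attempts
            labels.append(sum(window) / len(window))
        else:       # no future attempts left to observe
            labels.append(None)
    return labels

# 1 = correct attempt, 0 = incorrect attempt (made-up sequence)
print(forward_proxy([0, 1, 1, 1, 1, 1, 0], n=5))
```

Even though the first attempt here fails, the five successes that follow give it a proxy mastery of 1.0, which is exactly the kind of consistent signal I argued for, rather than the raw 0/1 of the single next attempt.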

The answer would be that, for learning such a skill, which statistical model makes more sense depends on the context. Another piece of work I did is modeling behavior at the attempt level rather than the assessment level. Practically, in most online learning tutoring systems, people aggregate attempts to the assessment level and model from there. The problem with that is that the details at the attempt level are distinctly different: the attempt-level information tells us how the learner is making progress within the same assessment. So what I did is enhance the Bayesian Knowledge Tracing model with attempt-level information. The Bayesian Knowledge Tracing model is a Hidden Markov Model that models the probabilities of learning, guessing, slipping, and transitioning, and of course the knowledge state. We also did some minimal-exercise estimation to ensure the model produces reliable estimates and offers insights to our designers about how many assessments they need to design, so that learners have enough opportunities to practice and demonstrate what they've learned.
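A minimal sketch of one Bayesian Knowledge Tracing update looks like the following; the guess, slip, and learn parameters and the observation sequence are illustrative assumptions (the real model in the talk additionally incorporates attempt-level information).

```python
def bkt_update(p_know, correct, guess=0.2, slip=0.1, learn=0.15):
    """One Bayesian Knowledge Tracing step: compute the posterior
    probability of mastery given the observed attempt outcome, then
    apply the learning transition."""
    if correct:
        num = p_know * (1 - slip)            # knew it and didn't slip
        den = num + (1 - p_know) * guess     # or didn't know and guessed
    else:
        num = p_know * slip                  # knew it but slipped
        den = num + (1 - p_know) * (1 - guess)
    posterior = num / den
    # transition: a chance of learning the skill after this attempt
    return posterior + (1 - posterior) * learn

p = 0.3  # prior probability the skill is already known (assumed)
for outcome in [1, 1, 0, 1, 1]:  # observed attempt correctness
    p = bkt_update(p, outcome)
    print(round(p, 3))
```

A correct attempt pushes the mastery estimate up, an incorrect one pulls it down, and the learn term models gradual acquisition across attempts.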

I have two papers from last year at ACM Learning at Scale; feel free to check them out. All right, the second topic is personalized adaptive learning. Personalized learning is not new, and people have different definitions for it. I define it as a tailored learning experience with accurate knowledge state estimation and targeted recommendation of the next best learning activity. When I say accurate, I mean accurate for you, not accurate for an average learner. Targeted recommendation means a recommendation for you based on where you have been, where you are, and what you aim for. As we discussed knowledge state estimation, think about what we usually do in real life when we learn, not online: in order to learn where we are, we go to experienced or senior people to help us figure out where we possibly are, by describing what we want, what we did, and what we are struggling with. Since they are more experienced and have gone through similar learning experiences, they might provide an estimate of where we are and what we could probably do as the next step.

And if they share some of the same past experiences, that would be awesome, because they could know you better that way. Inspired by that thought, that people with experience could probably know where you are, we would naturally start to think: why not let the machine do the job for us? The job means: find people who are already experienced, look at what their trajectories look like, then look at your trajectory, estimate where you probably are in the journey, and suggest what you could try based on what they've done up to their current status and the actions that made them experienced and successful.

So we can further optimize the learning trajectory by looking at the most efficient path forward for you. Yes, that is the solution. The good thing about this is that it not only scales, but is also more practical and accurate in some cases, because in real life a mentor is not always easy to find and connect with, and the estimation from a mentor is just one person's perspective: it might be biased, and there's no way to tell whether it is. What the machine can do is scan all those experiences from others, figure out which kinds of people and strategies might be of help, and calculate your current status and next steps based on information from others. To me, that solution from the machine is more transparent, much closer to the truth, and we also have some level of control over it. So why not? But we need to be prudent when we design the algorithm. What do we mean by experienced? How experienced does that person need to be? Is the evidence relevant and convincing? Part of the reason I love science is that every step I take brings me closer to the truth. It also means that most of the time we don't have a direct answer to those questions. That is why we need to do experiments and discover the truth, or at least try to collect some evidence to support the unknown truth, or even just to prove that something is wrong, that something is not working.

In this application we follow the same philosophy of doing science: we first define the algorithm, then we test it out. The algorithm we came up with uses a similarity measure to define who is similar to you among those who are already proficient in the skill, then looks at their most common next steps and uses that as the recommended next step for you. We tested that out on our learning platform at Amazon. Sadly, I'm not allowed to share those results right now, but I hope I can soon, and I can say that I have confidence that it is working properly. OK. The sub-question, as we just stated, is how to define who is similar to you, or which attributes contribute to the similarity with the similar and experienced people from whom you can learn. In order to do that, I incorporate learner attributes to augment the learner profile, and hope the machine can figure out which attributes really matter in which context. A learner profile includes categorical features such as tenure, skill set, and job info.

And also learning patterns, like learning pace and learning style; the machine takes those in and does its job as we ask it to. Part of the results I can share: we found that at the beginning of a learning experience, when we don't yet have enough data from the learner in that particular experience, those categorical attributes played a more important role in measuring similarity.

But as the system collects more data about the learner's learning experiences, the learning pace and learning style contribute more to finding similarly experienced learners. Some details of the implementation are described in the paper.
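The recommendation idea can be sketched roughly as follows; the feature vectors, peer records, and activity names are made up for illustration, and plain cosine similarity here stands in for the attention-based similarity used in the actual system.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two learner feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend_next(me, peers, k=2):
    """Rank already-proficient peers by profile similarity and recommend
    the most common next activity among the top k."""
    ranked = sorted(peers, key=lambda p: cosine(me, p["features"]), reverse=True)
    votes = Counter(p["next_step"] for p in ranked[:k])
    return votes.most_common(1)[0][0]

# hypothetical proficient peers: profile features + the step they took next
peers = [
    {"features": [0.9, 0.2, 0.7], "next_step": "regression-lab"},
    {"features": [0.8, 0.3, 0.6], "next_step": "regression-lab"},
    {"features": [0.1, 0.9, 0.2], "next_step": "nlp-quiz"},
]
print(recommend_next([0.85, 0.25, 0.65], peers, k=2))
```

The learner whose profile resembles the first two peers is steered to the step those peers commonly took next, which is the "learn from similarly experienced people" idea in miniature.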

If you're interested, please check it out. We use an attention layer to find the similarity, and we use generative modeling, variational autoencoder inference, to embed the attributes, just for your information. All right, the third topic is targeted feedback generation and insights. A little background: one big expectation we have for tutoring systems is that they can generate feedback for a learner like a coach or a teaching assistant. There are two benefits to such a feature. One is that learners can receive guidance at any time they need it. The other is that it frees time for teachers to focus on other important tasks in teaching. What has been extensively looked at is how to conduct auto-grading, or how to design a one-to-one mapping of feedback beforehand for every possible situation, so that feedback can be surfaced as needed. Auto-grading is helpful, but it's not enough, because we need feedback, not just a score, right? Tell me how I can improve, not just how badly I did. For multiple-choice and single-choice questions, maybe predefined feedback is possible: for instance, if you picked this distractor, then work on this concept. But when it comes to open-ended constructed response questions, it becomes difficult, because the answer from the learner is no longer a predefined one; it is an open-ended response.

So we need the ability to understand what the learner is answering and communicating before we can offer any help. Thus, in this topic we focus on generating targeted feedback for constructed response question activities. It is an NLP application. The difficulty, as I mentioned, lies in how to extract semantics from the free-form text, match it to what is expected, and derive the knowledge gap, so that we can provide feedback based on that gap: where the learner needs to improve in learning that particular skill.

Especially when there is more than one grading point to measure, things become even more interesting. In this example, we have five grading points that we expect from learners, and we aim to provide guidance based on what they've mastered and what they haven't out of those five points. NLP is always interesting; there is always a kind of pipeline, and in this case the solution we came up with, which works well, is as follows. We first segment the learner answer and the model answer into chunks. Then we extract the semantics from both answers and compare their similarity at the grading-point level. After that, we derive what is answered successfully and what is not. Finally, we provide feedback based on what is not answered correctly: that is the knowledge gap, or the misconception, where the learner needs more practice. A little more detail on both segmentation and semantic extraction follows.
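A toy version of the matching step might look like this; token-overlap (Jaccard) similarity is used here as a simple stand-in for the transformer-based semantic similarity in the real pipeline, and the chunks, grading points, and threshold are all illustrative assumptions.

```python
def jaccard(a, b):
    """Token-overlap similarity between two text snippets; a crude
    stand-in for semantic (embedding-based) similarity."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def knowledge_gap(answer_chunks, grading_points, threshold=0.3):
    """Match each expected grading point against the learner's answer
    chunks; the points no chunk covers form the knowledge gap that
    feedback should target."""
    gap = []
    for point in grading_points:
        best = max(jaccard(chunk, point) for chunk in answer_chunks)
        if best < threshold:
            gap.append(point)
    return gap

# hypothetical learner answer chunks and expected grading points
chunks = ["normalize the features first", "then split train and test sets"]
points = ["normalize the features", "split train test sets", "tune the learning rate"]
print(knowledge_gap(chunks, points))
```

The two covered points are treated as mastered, and the uncovered one is the gap the feedback would address.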

We compared the graph-based solutions, the statistical ones, and the semantic, transformer-based (Hugging Face) approaches, to see what works, when, what doesn't work, and why, and there are quite a few interesting findings there. This paper was actually selected as the best paper at the AAAI AI education workshop this year, so check it out if you're interested. All right, let's get into the fourth topic, conversational agents. One big concern about online learning is that it lacks face-to-face or in-person practice opportunities, even virtual ones: a live virtual discussion, not a prerecording. In-person practice is important, especially for developing skills that are applied in conversation, such as communication or interviewing skills. On the other hand, we have voice assistants already living in our homes, and we want to see how an assistant like Alexa can help in a learning context, not just carrying on conversations about the weather or some random chitchat. We want to see how Alexa can support us both emotionally and intellectually. So I explored a little around what role Alexa can play in the learning experience, and I designed two roles for Alexa: one is a peer, the other is a coach.

As a peer, Alexa is expected to understand what the learner is communicating and then respond with her own wit; of course, I designed the wisdom for her. As a coach, Alexa is expected to whisper to the learner what needs to be improved, or offer some hints. In order to support those two roles, I developed a conversational agent framework that can be configured as needed, with modules for voice recognition, scenario identification, branching conversation, a conversation state tracker, natural language understanding, hint matching, and conversation recording to the cloud.

In order to make it a conversation, a designer needs to configure this framework: design how Alexa responds if the learner behaves as expected, what kind of hints to provide if not, and also configure the different scenarios and branches of a conversation, to make it natural and complete. We implemented a real application for supporting the development of interviewing skills.
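A minimal sketch of what such a branching configuration could look like is below; the scenario names, expected phrases, and hints are invented for illustration and are not the real framework's schema.

```python
# Hypothetical branching-scenario config: each node has the behavior we
# expect from the learner, a coach hint if they go off-script, and the
# next conversation state to advance to.
SCENARIO = {
    "greet": {"expected": "my name is",
              "hint": "Try opening with a short self-introduction.",
              "next": "strength"},
    "strength": {"expected": "my strength",
                 "hint": "Mention one concrete strength with an example.",
                 "next": None},
}

def agent_turn(state, learner_utterance):
    """Return (hint_or_None, next_state): advance when the learner
    behaves as expected, otherwise whisper the configured hint and
    stay in the same state."""
    node = SCENARIO[state]
    if node["expected"] in learner_utterance.lower():
        return None, node["next"]   # expected behavior: move forward
    return node["hint"], state      # off-script: coach hint, retry

print(agent_turn("greet", "Hi, my name is Sam"))
print(agent_turn("strength", "um, not sure what to say"))
```

This is the coach behavior in miniature: hints are whispered only when the learner's utterance misses the expected move.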

Hundreds of learners were involved in that pilot launch. As a result, we found that learners who practiced with such an activity outperformed others by 10%, measured by first-attempt performance on the practice where we adopted the measure. Personally, I like it when she whispers, because that shows her kindness when I'm not doing a good job. Since I hope I still have your attention, we're going to dive into the last topic, which is the most interesting one and my favorite: the skill model of learning. At the beginning, we talked about the student model of learning, which is about knowledge state estimation. Now we're going to talk about the skill model of learning. A skill model of learning is a cognitive model of learning: a computational representation of how people think about and develop the knowledge within a subject area. For instance, very broadly, as a data scientist performing a data analysis task, there are skills around data preprocessing, building a neural net, defining a metric, and studying the use case, and usually the use case study comes after you have mastered all the other stepwise skills. A skill model is expected to be fine-grained: it would decouple data preprocessing into, say, cleaning and statistical analysis, and statistical analysis would further be decoupled into the different skills we need for a statistical analysis, like basic concepts such as the central limit theorem and maximum likelihood estimation. A cognitive model of learning tries to structure the skills within the subject area so that designers can design more clearly structured learning experiences.

"More clearly" means that with each step you take, you know the purpose and the desired outcome, and the sequence of learning steps fits how learning usually happens. As a result, learners can learn more efficiently and naturally, learning in the flow. At the same time, when there's something learners are struggling with, we can easily identify the cause, because we know the purpose of that particular piece of design, so we are able to offer hints or guidance. So it is very important, and very difficult, to construct a skill model, a knowledge map, a knowledge structure. The difficulty comes, again, from the covert nature of learning. The best we can do is augment human intelligence with artificial intelligence and come up with such a cognitive model of learning. It works as follows: learning designers conduct cognitive task analysis or think-aloud protocols with SMEs and unpack the knowledge into skill models.
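To make the idea concrete, an initial human-authored skill model can be represented as a simple hierarchy like the one below; the specific breakdown is an illustrative assumption extending the data-analysis example.

```python
# Illustrative initial skill model from cognitive task analysis: a
# nested breakdown of the "data analysis" skill (assumed sub-skills).
SKILL_MODEL = {
    "data analysis": {
        "data preprocessing": {
            "cleaning": {},
            "statistical analysis": {
                "central limit theorem": {},
                "maximum likelihood estimation": {},
            },
        },
        "building a neural net": {},
        "defining a metric": {},
        "use case study": {},
    }
}

def leaf_skills(model):
    """Flatten the hierarchy into the finest-grained skills, the level
    at which knowledge tracing would estimate mastery."""
    leaves = []
    for name, children in model.items():
        leaves.extend(leaf_skills(children) if children else [name])
    return leaves

print(leaf_skills(SKILL_MODEL))
```

The leaves are the fine-grained skills a designer would attach assessments to, while the interior nodes give the structure that makes it easy to see where a struggling learner got stuck.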

An initial skill model is born from human intelligence, and as the course is launched and learners start engaging, we apply algorithms to discover the skill model from the learner interaction data. The third step is to provide the discovered skill model back to the designers.
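As a very simplified stand-in for that discovery step, one could group assessment items whose correctness patterns across learners correlate; the actual approach uses a generative latent-factor model with a Gaussian mixture, and the data below is made up.

```python
import math

def corr(x, y):
    """Pearson correlation between two correctness vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def discover_skills(item_results, threshold=0.7):
    """Greedily group items whose learner correctness patterns
    correlate strongly, treating each group as one latent skill --
    a crude stand-in for the mixture-model discovery step."""
    clusters = []
    for item, results in item_results.items():
        for cluster in clusters:
            rep = item_results[cluster[0]]  # compare to cluster's first item
            if corr(results, rep) >= threshold:
                cluster.append(item)
                break
        else:
            clusters.append([item])
    return clusters

# rows: per-item correctness of the same five learners (assumed data)
items = {
    "q1": [1, 1, 0, 0, 1],
    "q2": [1, 1, 0, 0, 1],  # same pattern as q1 -> same latent skill
    "q3": [0, 0, 1, 1, 0],  # different pattern -> a different skill
}
print(discover_skills(items))
```

Items that the same learners tend to get right or wrong together end up in one cluster, suggesting they exercise the same underlying skill, which is the signal the designer would review against the hand-built model.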

The designers can then take a look at the perspective from the data, revisit the course design, and incorporate the parts that are useful into revisions of the course. The algorithm we developed is in step two: a generative model that follows rules to represent the latent factors of the skill model, and a Gaussian mixture model used to discover the skill model based on those latent factors. I'm trying to make it sound simple, but honestly, when it comes to the brain and how the brain works, it is always challenging, and a little more fun to work on. So that's all: I touched upon five topics, and thanks for staying in my session.