Demystifying AI for University Admissions Leaders


Will Rose: Hi, everyone. My name is Will Rose, and I am the Chief Technology Officer at StudentsSelect.ai. We're a provider of AI-powered solutions and advanced analytics for higher ed, specifically in the admissions space. We're proud to sponsor today's webinar, which will cover some important items regarding AI technology for admissions leaders, a technology that is becoming more mainstream in higher ed.

I'll be monitoring the Q&A area in the Zoom webinar, so please feel free to enter any questions you have throughout the presentation. We'll be sure to cover them during the Q&A portion of the webinar, which will come toward the end of the presentation.

I'd now like to introduce our presenter, Dr. Emily D. Campion. Dr. Campion is an assistant professor of management in the Strome College of Business at Old Dominion University. She is also a consultant for Campion Services. Her research falls under the future-of-work umbrella and includes topics related to machine learning and natural language processing, personnel selection, alternative and remote work experiences, and workforce diversity. Emily, thank you, and I'll now hand things over to you.

Emily Campion: Fantastic. Thank you so much, Will. I'm excited to be here talking today about demystifying AI for university admissions leaders.

It seems we can't go anywhere without thinking about artificial intelligence: from our phones and predictive text, to using AI to detect diseases more quickly. And for all the advantages it can bring, there are still many challenges and barriers associated with artificial intelligence that we'll talk about today. And so individuals and institutions alike need to be smart consumers.

Anyone who's paying attention to the AI landscape would observe that it is a lot like the Wild West. There are not very many regulations at this time, particularly at the federal level. Most are occurring state by state. The one you may be most familiar with is the Artificial Intelligence Video Interview Act in Illinois. I was recently looking at a website that tracks this legislation (we can send it out), and I saw that they recently amended certain parts of it regarding reporting demographic variables. So that's something to keep an eye on.

And while higher education has been pivotal in the development of artificial intelligence (in fact, the field earned the name "artificial intelligence" at an academic conference many decades ago), much like the federal government, higher education moves slowly, like a big ship. But inevitably, artificial intelligence has made its way into higher education. Not simply in terms of being taught, it's been there for a while, but in terms of being part of the decision-making process, because higher education has traditionally been quite conservative with those things.

But we believe it's happening at this time due to shortages in the admissions process. Those shortages come in three flavors, although I suspect there are many admissions folks in the room who would say there are a few more. These are the broad three we've identified.

First are tighter budgets, and we had tight budgets before COVID. As a faculty member, I'm well aware of some of the budget cuts that occurred during COVID, which made things even tougher. Budget cuts often mean fewer staff, and fewer staff means less time with candidates or applicants. We've also seen admissions officers use, or have access to, less information on candidates. Now, this has happened in a couple of ways. The main one has happened in service of improving representation on campus and improving access to education.

Something we noticed during COVID is that access to standardized testing became extremely difficult, and so higher education institutions had to make some really tough decisions about how to assess applicants without standardized testing. We're seeing this reduced reliance on standardized testing, and while, again, that's in service of a very noble goal for higher education, it also leaves admissions with less information to work with. Which, of course, is a massive challenge given how important this decision is.

We believe that artificial intelligence can really help with this: particularly with leveraging resources more effectively, especially if you don't have very many, and with helping to make better decisions. So the goal today is to speak about how it can be helpful, but also to speak about its challenges. As an academic, I think it's my job to say where parts of that research are not well developed yet, so you can be cautious.

So let's define AI. Unfortunately, the definition is very, very broad: it's simply about the imitation of human behavior by computers. And it's at this point I like to remind individuals, whether I'm speaking to a crowd like this or speaking with students, that when we think about AI, we often think about human replacement. That's fair, because that's the way it's been spoken about in the popular press, and we've seen it ourselves: technology replacing humans has happened since the very first industrial revolution. Now we're on our fourth, which is characterized by a number of things, but particularly intelligent systems such as AI.

But in the case of admissions and hiring (now, most of my research in this area is in employment, not higher education, but admissions is an analogous context, I believe, so we can maybe draw some generalizability from those findings), it's not about replacing human decision making, for several reasons. First, we need more research to be able to do that, and second, we're making decisions about humans. And when we make important, high-stakes decisions about humans, whether that's in employment or in admitting to a university, humans are very sensitive to justice in those instances. So it's really important we see AI as an aid or a tool for human decision makers, not a replacement. That's our baseline for this presentation.

AI is really helpful in two key ways. The first is that it's really helpful when tasks are repetitive, frequently performed, and time consuming. So let's look at a higher education example.

It seems to me that many, many institutions have done a lot of work to generate content, not only for their current students but for potential students, and I can imagine admissions staff experience a lot of the same questions over and over again, around things like "Where do I actually submit my application?" and "What materials do I need?" But of course, you've already done the work of putting that online somewhere. So we can envision chatbots being used to help applicants navigate toward that information without taking the time of a staffer who's trying to do many, many other things and probably wears way too many hats. We can envision chatbots being helpful there and guiding students toward the right material.

At the same time, in addition to admissions putting this material and content online, I know that programs put material online for potential students, like vignettes or experiences from alums. Students can navigate through this information using a chatbot to guide them, to gauge their fit with a program, or use it to find their way to instances where they can speak with a mentor, instead of trying to navigate through people who are really busy. Using natural language processing to develop these models can be incredibly helpful. And then, of course, once they become students, we're seeing chatbots being used in career services and student affairs to help the workers there as well.
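To make the chatbot idea concrete, here is a minimal sketch of the kind of FAQ routing described above, using TF-IDF similarity with scikit-learn. The questions, answers, and the 0.3 fallback threshold are all invented for illustration; a production chatbot would use richer language models and real institutional content.

```python
# Minimal sketch of FAQ routing with TF-IDF similarity (hypothetical FAQ entries).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "Where do I submit my application?": "Applications are submitted through the admissions portal.",
    "What materials do I need to apply?": "You need transcripts, a personal statement, and two references.",
    "When is the application deadline?": "The deadline for fall admission is January 15.",
}

vectorizer = TfidfVectorizer()
faq_matrix = vectorizer.fit_transform(faq.keys())

def route(question: str) -> str:
    """Return the stored answer whose FAQ question is most similar to the input."""
    sims = cosine_similarity(vectorizer.transform([question]), faq_matrix)[0]
    best = sims.argmax()
    # Fall back to a human when nothing matches well (threshold is illustrative).
    if sims[best] < 0.3:
        return "Let me connect you with an admissions staff member."
    return list(faq.values())[best]

print(route("where do i send my application"))
```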

The second way that AI can be helpful (remember, the first is about repetitive, time-consuming tasks) is in being able to synthesize large amounts of information from diverse sources. Now, our brains are incredible, they truly are, but we are not great at doing this repeatedly. We often have our own biases that come in. We run out of cognitive resources. Put differently, we get tired, and these sorts of things can affect how we synthesize information. If we leave it to an algorithm that we train well, it does the same thing every single time, and it's really good at this. This is what algorithms are built for. So let's look at a higher ed example.

You can imagine you've got thousands of applicants with lots of material, and they do such a great job generating and submitting all of it, and I'm sure you want to see all of it. What we can do is use AI to extract information from student materials and then train a model to create composites. And then from those composites we can create, for example, three tiers.

The first tier is students we know are going to be admitted. We know that students with their characteristics and their grades generally do well, so we can move them through the process without spending more resources trying to evaluate them. We know from our research using AI that they're going to do fine, or they're predicted to do fine.

But then we can spend more time, particularly on tier two, giving them additional resources, or spending time trying to figure out whether they fit with the institution or the program they're interested in. And finally, this offers a quicker feedback loop. Instead of going home with stacks of student materials, whether physical or digital, and this being a multi-week, multi-month process, we feed the algorithm and it offers this information back quite quickly. We all remember trying to get into college and that experience being so nerve-wracking, so having that information come to us more quickly as applicants is certainly a benefit.
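Here is a minimal sketch of the composite-and-tiers idea, assuming hypothetical, already-standardized feature scores and illustrative weights; in practice the weights would come from a validated model rather than being set by hand.

```python
# Sketch: combine standardized applicant features into a composite, then tier.
# Feature meanings and weights are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_applicants = 1000
# Pretend these are z-scored features extracted from application materials,
# e.g., GPA, essay score, interview score.
features = rng.standard_normal((n_applicants, 3))
weights = np.array([0.5, 0.3, 0.2])  # would come from a validated model

composite = features @ weights

# Three tiers: top 20% fast-tracked, middle 60% reviewed closely, bottom 20% flagged.
cuts = np.quantile(composite, [0.2, 0.8])
tier = np.digitize(composite, cuts)  # 0 = bottom, 1 = middle, 2 = top
print(np.bincount(tier))
```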

Now, tied to this notion that we're making these decisions with less information than before, something we're seeing is schools being interested in social media scraping: understanding features of our students by scraping their social media, whether that's Facebook or Twitter or Instagram or any other platform. The findings on this are really mixed. Most of the studies I'm familiar with are out of hiring, and in our science we really have two camps about whether these data should be used in the analogous context of employment.

The first side says: we do see that this predicts beyond the hiring decision. It does contain information that's useful about their behavior at work.

And the other side says: listen, this is an impression-managed version of a candidate. They are not bringing those characteristics to the workplace or to the classroom. So, quite frankly, the research is a bit mixed as of yet. You'll recall at the very beginning I said there are some places where we need more research in order to decide whether to operationalize, and this is one of those instances.

Another way that we can use artificial intelligence to synthesize a lot of information is through automatically scored video interviews. Video interviews offer a lot of flexibility to staffers making admissions or hiring decisions, because students can complete them at their leisure and admissions can review them at their leisure, a type of flexibility that scheduled interviews simply don't offer. The research, and again this is mostly out of hiring, shows that those who do this well are drawing on the long history of structured interviewing in employment. There's also a little bit of research that has looked at facial analysis, and the evidence is mixed there as well. Some of the companies who have done that have actually put it to rest because public opinion found it concerning, which, of course, is something we have to think about when we use artificial intelligence. So this is a really robust area, and one place I think we will absolutely see higher education using, if they don't already. Not the facial analysis, but the features we can extract from the language applicants actually use. So, pulling content from their interview responses.

I've touched already on a couple of things that make AI promising, but let's try to tie them to those resource shortages. First, we said there are tighter budgets, which means fewer staff, and fewer staff means less time with students. We've already talked about a couple of examples where we can free admissions officers from spending so much time evaluating materials by using an algorithm to help them do that, enabling them to have more meaningful facetime with applicants. Speaking with students is really the part that matters so much in admissions. I know that's something you enjoy, or else you wouldn't be doing admissions.

The second one here is limited information. Again, we're working with less information, and it's not just because of reduced reliance on standardized testing, but also because students provide such an incredible amount of text data in their submissions, and historically text data has been so difficult to score. You've got thousands of applicants submitting personal statements, responses to essay questions or maybe interview questions, resumes, letters of recommendation, transcripts. Processing thousands of these as a human is such a cumbersome task. So if we can offload some of that and use natural language processing to extract information, this really offers us an opportunity to combine it with the data that are already quantified, such as GPA or other scores, to create those composites I was mentioning before. But really importantly, we think that coming out of these data are characteristics that aren't currently being assessed in an admissions system. Personality variables related to student outcomes, for example, may be missed. Again, we ask students for a lot of things, but there's a point at which we can't ask for much more. So we can actually use information they already give us and use natural language processing, which is a type of artificial intelligence, to extract information and inform our decision making.
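A minimal sketch of that idea: text features extracted with natural language processing sit alongside already-quantified data such as GPA. The essays, GPA values, outcome labels, and model choice below are all hypothetical toy data; a real system would need far more data and proper validation.

```python
# Sketch: extract text features from personal statements and combine with GPA.
# Essays, GPAs, and labels here are invented, for illustration only.
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

essays = [
    "I organized a peer tutoring program and planned every session carefully.",
    "I adapt quickly and enjoy leading group projects.",
    "I took the class because my friends did.",
]
gpa = np.array([[3.8], [3.4], [2.6]])
succeeded = np.array([1, 1, 0])  # e.g., completed first year in good standing

text_features = TfidfVectorizer().fit_transform(essays)
X = hstack([text_features, gpa])  # text features alongside quantified data

model = LogisticRegression().fit(X, succeeded)
```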

Let's talk about these other characteristics for a moment. I'm going to stick with personality, mostly because I think a lot of people understand broadly what personality is, and also because it has a really long and rich history, not only in employment but also in admissions. Personality research helps us understand how people are going to perform on the job or in the classroom. I'm sure many of you are familiar with the five-factor model, also known as the Big Five. Maybe some of you were taught it with the acronyms OCEAN or CANOE. Those are the five main personality variables we tend to study.

Conscientiousness is the most consistent predictor across environments. Individuals who are high on conscientiousness are organized, they're planful, and they're achievement oriented. And we can imagine that, of course, highly conscientious students are much more successful in undergraduate programs. I was in the classroom yesterday, I taught two classes, and I was thinking about this and thought, "Yeah, my most conscientious students are the ones who tend to do better in my course."

What AI can also offer, and here I'm speaking from a faculty member perspective, crossing over from the researcher role to the teacher role: something that we struggle with so much with students are the red flags. What we can do is use artificial intelligence to extract this information, and then do analyses to see what red flags we can identify early with students in the application process, so we can ask them during the interview about, say, gaps in their history or concerning responses. And even more so, how can we identify barriers early on? What are the types of characteristics that predict things like being likely to drop out or likely to go on academic probation? Being a teacher, I see students when they're juniors and seniors. Sometimes I get them when they're sophomores. For some of them, I'm the first 35-person class they have, and so that's the first time a faculty member actually notices, "Hey, you're struggling. Let's talk about what resources are available on campus." Campuses spend so much time and money developing these resources, and students absolutely use them, but there are students who really need them and don't know about them. If we can identify those things early, we can intervene and offer these resources right off the bat. Now, we cannot force students to use these resources, even though sometimes we really want to because we know they'll be more successful, but we can at least offer these things and introduce students to them early, as opposed to waiting until their junior or senior year when they finally have small classes and some of those barriers are noticed by faculty.

In addition to personality and things that predict student success, we can also examine characteristics required of the occupation. We can look even further ahead of where the students are and ask, "What does this student have that we know will predict their success in their chosen occupation?" This is said understanding that students change majors, and it is the case that there are universal characteristics that predict success across occupations, but for those who maybe aren't going on to graduate school, this is a really important feature. Let's take nursing, for example.

Nurses, of course, we want them to be conscientious, but in addition we'd also like them to be decisive, to have good bedside manner (those social skills), to be service oriented, and to be rule-based. These are the sorts of things we can extract from that text data and actually use to try to predict additional outcomes down the line, such as success in the occupation.

Speaking of alternative outcomes: what we just talked about were things from the applicant side that we can draw from the materials they've submitted. Of course, we can also look at alternative outcomes, the other end of the model. We can look at performance metrics on campus, how they're actually performing in the classroom, and train our model to that instead of the hiring decision. I'll talk more about the implications of that in a moment. We can look at program completion. There's not a single university I'm aware of that doesn't think a lot about retention. It was a massive problem before COVID, and it's a massive problem now. And then licensure exams. Down the line, this might be more relevant for graduate schools, but again, can we find what features predict whether students will actually pass their licensing exams? And then finally, what in our minds is the gold standard, especially for graduate programs, though potentially undergraduate as well: performance on the job post-graduation. The reason I say "more so for graduate programs" is because we understand that the average student does tend to change their major, maybe a couple of times, before they graduate. So modeling that might be a little trickier, but certainly for a graduate program this would be useful.
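As a hedged sketch of what training to an alternative outcome might look like, the snippet below fits a classifier to a simulated first-year retention label rather than a historic selection decision. The features, outcome, and model choice are assumptions for illustration, not anyone's production pipeline.

```python
# Sketch: train to an outcome (simulated first-year retention) rather than
# the historic selection decision. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Stand-ins for scores extracted from application materials,
# e.g., composite, essay, and interview scores.
X = rng.normal(size=(n, 3))
# Simulated outcome: retention, loosely driven by the features plus noise.
y = (X @ np.array([0.8, 0.5, 0.3]) + rng.normal(0, 1, n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```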

Now, Will kindly mentioned that one of my research areas is workplace diversity, and that takes a couple of forms, though most relevant here is my research on adverse impact reduction. Again, I mostly do that in employment, but probably my favorite area of research is finding ways we can reduce adverse impact in employment selection or admissions selection. And, in my opinion, no conversation on AI is complete without talking about bias.

So let's dig in a little and see where some of those complexities lie.

Bias in artificial intelligence tends to come from two sources. The first is the data that the model is being trained to.

What does that mean, exactly? When we train a model, we're training it to a decision so that it can replicate that decision down the line without humans actually doing any scoring. You build the model by doing an extreme amount of scoring (we'll talk about that a little more in the next couple of slides), but we train the model to a human decision. Then we feed the algorithm data, and it tells us what that decision would be based on the model that was developed. So if you're training your model to historic decisions, and those historic decisions show subgroup differences, you're going to perpetuate those subgroup differences. I'm sure many of us can think of examples, probably from the news, where we've heard about a model being trained to human decisions and, for example, disadvantaging women. We've seen this happen, and that's because you're training to a biased data set, so of course you're going to have bias. But this is all part of the learning process in understanding artificial intelligence.
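The snippet below is a small simulated illustration of that perpetuation effect, not anyone's real system: historic decisions are generated with a built-in bias against one group, a model is trained to them through a proxy feature, and the model reproduces the gap even though it never sees group membership directly.

```python
# Sketch: a model trained to biased historic decisions reproduces the bias.
# All data, groups, and features are simulated for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "score": rng.normal(0, 1, n),
    "group": rng.choice(["A", "B"], n),
})
# A feature correlated with group membership (like zip code and race).
df["zip_feature"] = (df["group"] == "B") * 1.0 + rng.normal(0, 0.5, n)
# Simulated biased historic decisions: group B was held to a higher bar.
df["admitted"] = ((df["score"] - (df["group"] == "B") * 0.8) > 0).astype(int)

model = LogisticRegression().fit(df[["score", "zip_feature"]], df["admitted"])
df["predicted"] = model.predict(df[["score", "zip_feature"]])

# The model never sees group directly, yet the proxy lets it reproduce the gap.
print(df.groupby("group")[["admitted", "predicted"]].mean())
```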

A good example, most recently: Brookings had a recent article on using AI to determine financial aid amounts. In theory, this sounds like a good idea. That's a tough decision, a big decision that requires processing a lot of data. However, if you've been paying attention to artificial intelligence research, or its applications, over the last ten years, you would remember that several years ago there were banks trying to use artificial intelligence to make lending decisions on loans. They trained it to historic decisions, the historic decisions had bias, and so they found evidence of lending discrimination.

Now, most of you at this point might be thinking, "Why would I use this if it has this potential issue?" Fortunately, areas of research such as interviewing, and the psychometric side of psychology, help us understand why that's happening and offer us opportunities to reduce it. Here's one way we've done it, and I've personally done this in my consulting and in research: we use multiple human raters per candidate, not just one person making a decision. Three or more human raters give ratings on candidate materials, scoring systematically with anchored scales. What does that mean? Instead of one person saying, "On a scale of one to five, how do you rate this candidate, where one is 'probably won't succeed' and five is 'definitely will succeed'?" My definition of that might be very different from Will's, for example. So that's not offering us much reliability, which is very important, because we're trying to model this over and over again, and we need it to be reliable.

Instead, you're going to use behaviorally anchored rating scales. So a five out of five on leadership would be: this person offered evidence that they communicated with their team in an interview response, this person offered ways they motivated their team, this person communicated the outcome of their leader behaviors. That would be a five.

And you can see how that's very different from a one-to-five scale where a five just means "they'll probably succeed in this program."

And then, finally, of course, reliability. Very few things matter more in psychometrics than reliability. So, again, we're really using the research we have in other areas, which has shown for decades that this sort of structured scoring absolutely helps reduce bias.
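Here is a minimal sketch of the kind of reliability check this implies, using invented ratings from three raters on a one-to-five behaviorally anchored scale. Real work would use a formal statistic such as an intraclass correlation, but mean pairwise correlation conveys the idea.

```python
# Sketch: check agreement among three raters before modeling their mean ratings.
# Ratings are invented for illustration.
import numpy as np

# Rows = candidates, columns = raters (1-5 behaviorally anchored scale).
ratings = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 2, 3],
    [1, 2, 1],
])

# Mean pairwise inter-rater correlation as a simple reliability check.
corr = np.corrcoef(ratings.T)
pairwise = corr[np.triu_indices(3, k=1)]
print("mean inter-rater correlation:", pairwise.mean())

# The composite we would actually train a model to:
print("mean rating per candidate:", ratings.mean(axis=1))
```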

The other way we can help here is by monitoring. This seems almost too obvious to say: of course we're going to monitor this. But we monitor it in the same way we monitor human systems. Right? If you look at admissions systems, I'm sure you are monitored in the sense that you are checked in on, you look at the actual distribution of individuals who are being admitted, and you constantly check and iterate. It's the same thing we do with systems that are purely built from humans; we do that with AI. It requires updating as well, because the world evolves, particularly when it comes to language use.

The other way we see bias enter our models (the first way was training: if we're training to biased data, we're going to get an output we shouldn't even look at) is through applicant data. It may be that applicants explicitly mention their race, gender, age, or another protected class. Of course, that can introduce bias into the model. The other way, which sometimes eludes people, is proxies. Because the United States is still geographically segregated in many ways, zip code acts as a proxy for race, for example, and that's something that needs to be considered. I'll tell you about the internship one in a second; that one was really interesting. These proxies can hide from us, and we don't realize what's happening. But once we see them, we say, "Oh yeah, of course that's a proxy for race or gender or age, and we should be really cautious."

We reduce this by, first of all, ensuring those explicit mentions aren't in the data. We should take those out. Then we conduct analyses. We look through and ask, "Are any of our variables, anything we're drawing from the data, showing subgroup differences? And what do those differences mean?" Again, conducting those analyses ensures they aren't biasing our results in a way that disadvantages protected groups.

Now, there are many organizations and many researchers who care about this, and lots of ongoing research trying to understand how bias occurs in our models.

I'll give you one example, because I thought it was interesting. One organization did some analyses and found that internships were a proxy for age. Imagine a 23-year-old applying for a job and answering interview questions. What professional experience do they have? Likely not very much, if they went the traditional route straight from high school to college. But they have internships, and so they speak about their internships. Meanwhile, someone in their 40s, why would they speak about internships from when they were 22 when they have 15, 16, or 20 years of professional experience to speak from? So you can see how some of these proxies hide, and they require us to do additional analyses. Once we do those, we can do a much better job of cleaning our data to reduce the likelihood that we're biasing our models.
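Below is a small simulated sketch of one way to hunt for proxies, echoing the internship-and-age example above: test how well each candidate feature predicts protected-group membership. The data and feature names are invented; an AUC near 0.5 means a feature carries little group information, while a high AUC flags a potential proxy.

```python
# Sketch: flag features that act as proxies by testing how well each one
# predicts protected-group membership. Data and names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "essay_length": rng.normal(500, 80, n),
    "mentions_internship": rng.random(n),  # stand-in for an extracted text feature
    "age_over_30": rng.integers(0, 2, n),
})
# Simulate the proxy: younger applicants talk about internships more.
df["mentions_internship"] += (1 - df["age_over_30"]) * 0.8

for col in ["essay_length", "mentions_internship"]:
    auc = cross_val_score(LogisticRegression(), df[[col]], df["age_over_30"],
                          scoring="roc_auc").mean()
    print(f"{col}: AUC for predicting age group = {auc:.2f}")  # ~0.5 = no proxy
```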

So my takeaways for you are: first, AI requires monitoring like any other system, and it takes a little time to build. It does save time and money in the long run, but it requires monitoring, just like humans do. And we do have some early and compelling research that AI can be a really useful tool for admissions officers.

And we move on to Q&A. Thank you.

Will Rose: Great. So, Emily, we have a couple of questions that have come in, so we'll jump right into those.

Emily Campion: Wonderful.

Will Rose: So, this is a good question. You touched on some of this with the nursing example and some of the traits we might find helpful, but what are some of the other personality traits that seem to be good predictors of success?

Emily Campion: Hmm. That's a good one. For student success, we can envision things like adaptability. We can imagine things like grit to be strong predictors. Now, admittedly, I'm not speaking directly from research in the admissions space; I'm speaking more from the hiring space. I hope that's all right, but I do believe it's analogous here. So I would say conscientiousness, of course, and that achievement orientation, which we can measure in a couple of different ways. And then adaptability. And then things like critical thinking skills would be some of the big ones, particularly in healthcare, to play off the nursing example. And I should mention: if anyone is interested in additional resources, I can certainly provide those.

Will Rose: Great. Kind of along the same lines, you talked a little bit about red flags. So one of the other questions we have: are there any personality traits that might be considered red flags?

Emily Campion: Yeah. So in personality, we've got the Big Five that I mentioned, and we also have something called the dark triad, which is narcissism, psychopathy, and Machiavellianism. They sound very threatening, but we're not talking about, say, narcissistic personality disorder. We're talking about personality traits, and some of those have been shown to predict counterproductive behaviors in the workplace. And I think that's absolutely something we could use to predict in class as well.

If there are elements of too much narcissism (or actually too little, I'll get to that in a moment), we see students aren't going to be successful in groups. And guess what college is: 90% group work.

However, there's some really fascinating research, and I'd be happy to send this to you, on how moderate amounts of narcissism actually do predict leader effectiveness. We always like to talk about Steve Jobs as the example. But I'm sure you can see this, either in your interactions with students directly or in the classroom: a student who's maybe mildly attention-seeking, with some of the other parts of narcissism (that's not a variable I study often, but I do some of that research), actually emerges as the leader in their group, and they can really rally people sometimes. So that offers a positive spin on narcissism.

Another one that we see predict things like counterproductivity, absenteeism, and withdrawal is negative affectivity, which is just a fancy way of saying a bad attitude. We can measure it, and oddly enough, it does tend to predict. We're not talking about "hey, this student had a crummy day"; we're saying that, on average, they have a negative affective trait. Over time, they tend to spin things negatively, to think of things negatively. This tends to reduce their self-efficacy, their ability to really respond to challenges, and therefore we would likely see that they would not be as successful.

But that, I think, is a really good example of one where we can offer opportunities early to build self-efficacy, which is, of course, your belief that you can handle the challenges that come at you. If we can offer some of those things early, before students come to campus or when they're new on campus, and introduce them to clubs and find them social support: that would be an example where we can see it, we can identify it, and then we can offer resources.

Will Rose: Great. Another question: Is there a way to find out if our human decisions in the past were biased? This is something you touched on a little bit as well.

Emily Campion: Yeah! If, for example, you have data on historic human decisions, you can absolutely do analyses to see if those decisions were biased. Now, when I say "biased," I'm speaking quite broadly. More narrowly, if you and I were sitting on a Zoom call together and talking about what you were interested in looking at, the question would be, "What subgroup differences are you seeing, and how are those occurring?" Do you see that you're admitting more men, or more women, or more non-binary applicants into a certain program, or admitting them generally more or less often, and, of course, differences by race or ethnicity as well? You can examine these subgroup differences simply to see if there is an even distribution. And because the United States is not evenly distributed by race or gender, the argument would be to use adverse impact analysis. I'm going to get a little technical here: there is a formula for that. It's quite simple, and I'd be happy to send more information on it. That got a little professorial, so the straightforward answer is "yes," you can do analyses to see if you have subgroup differences in your historic decisions.
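The formula she is likely referring to is the four-fifths (80%) rule commonly used in employment selection to screen for adverse impact: divide the focal group's selection rate by the reference group's rate and flag ratios below 0.8. A minimal sketch with invented counts:

```python
# Sketch: the four-fifths (80%) rule for screening adverse impact.
# The counts below are invented for illustration.
def impact_ratio(selected_focal, total_focal, selected_ref, total_ref):
    """Selection rate of the focal group divided by the reference group's rate."""
    focal_rate = selected_focal / total_focal
    ref_rate = selected_ref / total_ref
    return focal_rate / ref_rate

ratio = impact_ratio(selected_focal=30, total_focal=100,  # 30% admitted
                     selected_ref=50, total_ref=100)      # 50% admitted
print(ratio, "-> potential adverse impact" if ratio < 0.8 else "-> passes 4/5 rule")
```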

Will Rose: Great. So this next question is actually something that I can answer.

Emily Campion: Great!

Will Rose: If we use the Common App, can we still analyze our data with AI? The answer to that is "absolutely." That's something that Student Select helps schools with. We can take the data from within the Common App, perform our analysis on it, apply AI methods through standard data science practices, and build these models around it. So absolutely, the short answer is yes. Schools can certainly do that, and it's something we can help with as well.

Emily Campion: And, if I may pop in, I don't think I drove that point home enough, so thank you so much for asking that, and thank you, Will. What's really great about AI, and why I love studying natural language processing, is that we're using text we already have, which is just so fantastic. It means we don't have to generate new material. As admissions officers, I'm sure you're well aware that actually changing the process is difficult, and gathering new material takes time, as does deciding what to gather. As a researcher, data collection is exhausting. So if we can use our archival data, all the better. That's a really big value of using AI: we can use data we already have.

Will Rose: This next question is actually a pretty important one: what worries you about using AI in admissions? Anything specifically, Emily?

Emily Campion: Yeah. So I love this question. I love talking to people about this topic because, admittedly, even though I study this, there are things I don't know, but I do know how to figure out the answers. If we want to understand, for example, that subgroup question someone was asking, whether we can look at our historic data: we can run analyses on the submission materials, and we can pretty much pinpoint where the bias was coming from. Remember the proxy example. We may be able to figure out if there are patterns in the data where people are consistently being rejected for a certain reason, and that reason will probably emerge in the data somewhere. Since this is an area I know, that doesn't worry me. But there are two things that do worry me at the outset.

First, replacing human decisions. I think the reaction of students and applicants to this can't be overstated, and it needs to be researched more. There's not a ton of research on it, because not a lot of people do it, so collecting good data is hard. Thinking that this is a replacement for humans at this stage in the research is, I think, concerning. I hope everyone walks away thinking to themselves, "AI is a tool for me to use. It's supposed to aid my decision making, not replace me."

And the second thing is, I really worry about approaches I don't understand how to do. Things like facial analysis? I don't understand how to do it, so I'd rather we not use that on students. Those are the things that bug me, though I think the popular rhetoric around facial analysis has pretty much died down because there have been such important reactions to it. So those are the ones that concern me. The second one doesn't really concern me, because I don't think that's coming into higher education anytime soon, but thinking we can replace humans with this right now is premature.

Will Rose: Yeah, absolutely. I think that's something that Student Select completely agrees with. At least from our perspective, it's not about automating your admissions process, right? It's about providing you with more data and more tools to be able to streamline, to make things more efficient, and to incrementally help the process. So I think that was a great question and a great answer, Emily.

This next question is another one that I can answer myself. So, does it take a long time to get this analysis done on our applicants?

That's a little bit of a tricky question. It really depends on what kind of resources you have available to you. Is it something you want to do in house? You'd probably need a team that can do data analysis, extract the data, and build models.

In respect to working with an organization like ours, at Student Select, it's a pretty quick process. This isn't a long-term science project, right? We have a team that knows how to approach this model building and understands the data. So for the initial stages of understanding the data and building a model, we're talking days, not weeks or months. Once we understand the historic data, for example the information from the Common App, we build the models around it. Then processing new applicants moving forward is a very quick process. You'll be able to get those advanced analytics, the scoring, and those recommendations relatively quickly. So hopefully that answers your question.

Emily Campion: Yeah, I'll add as well, Will: some of these initial analyses don't take very long, a few days, like he said, if your model is built. A few weeks, maybe; that's sort of me hedging as a researcher: if it's going to take me a couple of days, I say a couple of weeks. But if you're building a model fresh and you need to strengthen your criteria, so if you need to rescore your data because you find that historic decisions do show subgroup differences, that can take a little longer. But again, you're using this model over and over, so this isn't a one-time use. You're putting in all of this work, so that could lengthen the amount of time to maybe a few weeks or a couple of months, depending on how much data you have.

Will Rose: Great. Another question just came in. The question is: I know this might be bordering on OCR (optical character recognition), but any ideas on how AI could be used to read transcripts, grades, rigor, trends? Any thoughts on that, Emily?

Emily Campion: Say that one more time.

Will Rose: Sure. Any ideas on how AI could be used to read transcripts, grades, rigor, trends? I guess, if I can jump in to start: with natural language processing and the kinds of techniques available to us today, this is something that is being done currently. One example would be resume parsing, right? Taking a resume, understanding the pieces of it, and comparing that to, say, a job posting. So in terms of reading transcripts and grades, on a very basic level that is something currently available to us from a technology perspective.
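As a hedged illustration of the parsing half of that task (after OCR has already produced text), the snippet below pulls course/grade pairs out of a hypothetical transcript layout with a regular expression. Real transcripts vary enormously in format, so this is only a sketch.

```python
# Sketch: parse course/grade pairs from OCR'd transcript text.
# The transcript layout and pattern here are hypothetical.
import re

ocr_text = """
FALL 2021
BIOL 101  Intro to Biology      A-
MATH 152  Calculus II           B+
SPRING 2022
CHEM 210  Organic Chemistry     A
"""

pattern = re.compile(r"([A-Z]{2,4}\s\d{3})\s+(.+?)\s{2,}([A-F][+-]?)")
for code, title, grade in pattern.findall(ocr_text):
    print(code, "|", title.strip(), "|", grade)
```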

Emily Campion: Yeah, I can't say this is a space I've often gone into. Thinking back, when we've used resumes, the information has already been in a form we could read into our software program immediately. So that's not one I've dealt with directly, and I don't want to comment beyond my expertise; I'm very hesitant to do that, so I apologize. But I can certainly look some things up, if you like, and send them your way. I don't like not having an answer for you.

Will Rose: Yeah, that's something we can certainly revisit, and we can reach out to the person who asked the question directly with some follow-up. But thanks for asking the question.

Emily Campion: Yeah, really.

Will Rose: And that was the last question that was waiting for us. Since no one else has any questions, we'll be wrapping up here. I appreciate everyone taking the time to join us today, and we hope you have a great rest of the week.

Emily Campion: Yeah. Thank you so much.

Mike Sisson | AACRAO: Alright. Bye, everybody. Have a good one.

