LA Colloquium 2020

Grade Expectations in Introductory Courses and the Effects on the Corresponding Majors - Paul Graf

Graf explored the probability of students remaining in the economics major after initially choosing it and found that gender and ethnicity had significant effects on the likelihood of remaining in the major.

Learn more about Paul Graf

Description of the video:

Greetings, my name is Paul Graf and I'm a senior lecturer in the Economics department at Indiana University. Thank you very much for your time. First, I'd like to thank the Learning Analytics Fellows program funded by the Center for Learning Analytics and Student Success; Dennis Groth, the former Vice Provost for Undergraduate Education; George Rehrey, the founding director of CLASS; Harsha Manjunath and Linda Shepard at IU BAR; and Nikita Lopatin, professor at Ashland University. Last year, Dr. Gerhard Glomm and I explored the probability of students remaining in economics after choosing economics, or checking the box on their applications to Indiana University. We found gender, ethnicity, and large sections had significant effects on the likelihood of remaining in the economics major. Based on these results, Dr. Glomm suggested expanding the analysis to include other majors. Often, students declare a major upon arriving at IU Bloomington, which may change from when they first applied, since students may change their major for different reasons, like their experience taking their first major course. After completing this course, students may switch for two reasons: ability or preference, or the grade they received in this course. This project focuses on the latter reason, so I believe this project may have important admission and retention policy implications that may improve retention figures and therefore the student experience at IU. This project analyzes three effects on the probability a student stays in their declared major. One: prior knowledge, measured by SAT scores and high school GPA. Two: experience, represented by the first major course grade and overall student performance that semester, excluding the first major course grade, designated as GPAO. Three: other qualitative factors like gender, ethnicity, residency, and financial need. 
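As a concrete illustration of the GPAO variable defined above (the term GPA computed over all courses except the first major course), here is a minimal sketch; the course names and data layout are invented for illustration, not taken from the project:

```python
# Sketch of the GPAO feature: credit-weighted term GPA excluding the
# first major course. Course IDs here are hypothetical examples.

def gpao(courses, first_major_course):
    """courses: list of (course_id, credit_hours, grade_points) tuples."""
    rest = [(c, h, g) for c, h, g in courses if c != first_major_course]
    total_hours = sum(h for _, h, _ in rest)
    if total_hours == 0:
        return None  # the student took only the major course that term
    return sum(h * g for _, h, g in rest) / total_hours

term = [("ECON-E201", 3, 2.0), ("MATH-M118", 3, 4.0), ("ENG-W131", 3, 3.0)]
print(gpao(term, "ECON-E201"))  # 3.5
```

Separating the first-course grade from GPAO lets the regression distinguish the effect of that one grade from the student's overall performance that semester.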
After exploring different major enrollments over the last five years, I chose economics, business economics and public policy or BEP, accounting, finance, biology, and psychology. Next, I identified the introductory courses for these majors. The introductory course chosen was E201 for economics and BEP, A100 for accounting and finance, L111 for biology, and P101 for psychology. Further studying each major, I identified specific qualitative variables to account for program-specific variation. I used a binary model, which I estimate using the linear probability (OLS) method, and for robustness, I also used probit and logistic regressions. To compare all six majors using only quantitative variables, I first ran a regression of the likelihood a student remains in their declared major one year after taking the introductory course against four quantitative variables: high-school GPA, SAT score, the course grade, and that term's GPA excluding the course grade. Next, to model major-specific variation, I ran similar regressions including identified qualitative variables specific to the major. Due to their similarity, I compared economics and BEP, identifying the qualitative variables of female, Indiana resident, and first-generation based on the descriptive statistics. For economics and BEP, using E201 as the first major course, I found that for economics the quantitative variables were not statistically significant. Indiana residents were, on average, more likely to remain, but first-generation students were more likely to switch out of economics one year after taking E201. For BEP, the higher the student's high-school GPA and SAT scores, on average, the more likely they remain in the BEP major one year after taking E201. The other qualitative variables are not statistically significant. Next, I compared accounting and finance and found similar patterns in their descriptive statistics. 
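To illustrate the estimation strategy described above, here is a minimal sketch on synthetic data: a linear probability model fit by OLS, with a logistic regression fit as a robustness check. The data, coefficients, and variable names are invented for illustration; this is not the author's actual pipeline:

```python
import numpy as np

# Synthetic data mimicking the covariates described in the talk.
rng = np.random.default_rng(0)
n = 2000
hs_gpa = rng.normal(3.4, 0.3, n)        # high-school GPA
sat = rng.normal(1200, 120, n)          # SAT score
grade = rng.normal(2.8, 0.8, n)         # first major course grade
gpao = rng.normal(3.0, 0.5, n)          # term GPA excluding that grade

# Simulated truth: staying in the major depends mainly on the course grade.
stay = rng.binomial(1, 1 / (1 + np.exp(-(-3.0 + 1.0 * grade))))

X = np.column_stack([np.ones(n), hs_gpa, sat / 100, grade, gpao])

# Linear probability model: OLS of the 0/1 outcome on the covariates.
beta_lpm, *_ = np.linalg.lstsq(X, stay, rcond=None)

# Logistic regression via Newton's method (IRLS), as a robustness check.
b = np.zeros(X.shape[1])
for _ in range(10):
    p = 1 / (1 + np.exp(-X @ b))
    H = X.T @ (X * (p * (1 - p))[:, None])   # observed information matrix
    b += np.linalg.solve(H, X.T @ (stay - p))

print("LPM grade coefficient:   %.3f" % beta_lpm[3])
print("logit grade coefficient: %.3f" % b[3])
```

Both fits should recover a positive effect of the course grade on the probability of staying; in practice one reports the LPM coefficients as marginal probability effects and checks that the probit/logit signs and significance agree.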
Specifically, I identified the qualitative variables of female, resident, first-generation, Asian, Hispanic/Latino, and Black/African-American. Using A100 as the introductory course, I found all four quantitative variables, on average, had a statistically significant effect on remaining in the declared major. The higher these values, the more likely the student remains in their declared major one year after taking A100. For accounting, on average, Indiana residents are more likely to switch, while Black/African-American students are significantly more likely to stay one year after taking A100. For finance, on average, female and Indiana resident students are more likely to switch, while Black/African-American and Hispanic/Latino students are more likely to stay one year after taking A100. Finally, I compared biology and psychology due to their popularity. I identified male, non-resident, first-generation, Asian, Hispanic/Latino, and Black/African-American as the qualitative variables. Running the regressions for these two majors using L111 and P101 as the first course for biology and psychology respectively, I found all but a student's high-school GPA have a statistically significant effect on remaining in both majors one year after taking the first major course. On average, males, Asians, and Hispanic/Latino students are more likely to remain in the biology major one year after taking L111. On average, Hispanic/Latino and Black/African-American students are more likely to remain in psychology one year after taking P101. To summarize: the first major course grade has a significant effect in the most popular majors; prior knowledge matters for students remaining in their declared majors; a student's performance in the other courses taken during the first-major-course semester, gender, and residency have varying effects; and except for economics, first-generation status has no statistically significant effects. 
Finally, minorities are more likely to remain in their declared major one year after taking the first major course. As further considerations, I would like to explore additional majors and generalize the results at the university, college, and major levels to help students stay in the major and potentially advance the diversity and inclusion goals at IU. Furthermore, I'd like to look at the variables of grade penalty and interactive effects on changing majors. Thank you very much for your time.

Investigating an Online Course Feature and Instructor (Re) Positioning for Equity using Social Learning Analytics - Dan Hickey and Joshua Quick

Hickey and Quick used social learning analytic methods to examine an online graduate education course, aiming to support engagement and learning for students who find themselves minoritized by the composition of classes and/or the disciplinary knowledge in those classes.

Learn more about Daniel Hickey

Learn more about Joshua Quick

Description of the video:

Hello everyone. I'm Dan Hickey. I'm here with Joshua Quick from the Learning Sciences program. We're going to talk about a course feature and some analytics work that's currently underway. First, of course, I'm going to say thanks to CLASS for supporting this work, and George Rehrey for all his support over the years, and thanks to some awesome graduate students and colleagues, particularly Suraj Uttamchandani, who was instrumental in the work we're going to be talking about today. Our work has been deeply informed by the work of the late Randi Engle; in key respects, we've really picked up where she left off when she passed away, unfortunately, in 2012. This notion of productive disciplinary engagement is probably one of the most useful and widely used sets of design principles to come out of situative theories of learning. More recently, we've embraced the last thing she published before she died, this notion of expansive framing. It's about pushing students to find connections with people, places, topics, and times. The goal with all of our work is to position students as authors rather than consumers of disciplinary knowledge and to hold them accountable to disciplinary discourse. The research context we're going to talk about is a course that I've been teaching online for 10 years. It's really been the test bed for many of the ideas in my work in this framework we call participatory learning and assessment. What are the key things about this course? Well, the instructor is pretty busy; I travel a lot in the summer, and I often teach this course on a compressed schedule, which means it's sometimes hard for me to get to class for several days at a time. So it's very important that the class can almost run itself when it needs to. I spend almost all of my time providing public feedback to the students and very, very little time engaging in private grading and private work. 
The point here is that the time I'm able to spend is spent positioning students as authors in the course. One of the real goals of a lot of this work is avoiding instructor burnout by minimizing grading and private feedback. We do this using the thing we call wikifolios; these are public artifacts of the class. All of the work happens either on these artifacts or in threaded discussions on student pages in Canvas. These are actually new discussion posts students create each week, so much of our work involves looking at the threaded comments that students post on their work. What we're going to focus on today is a new feature that we added, and I'll provide the rationale. Each assignment includes a private self-assessment. Each assignment concludes with public reflections; these are the most important part of our talk today. These reflections are summative for engagement, and we can use them for awarding completion points, but they're formative in that they shape engagement proleptically, because students know that they're going to have to reflect on their contextual, collaborative, consequential, and conceptual engagement. These serve to further position students as authors. They are posted publicly, and I'm able to comment on those reflections. There's also an exam that is automated and time-limited to speed things along even more. Here are the four reflection prompts that we had in 2018. Now, one of the things I want to talk about today is a recent critique of the PDE framework by Agarwal and Sengupta-Irving. They concede that PDE might help with group-based inequities compared to a traditional curriculum; for example, problematizing content from your own perspectives allows for culturally meaningful explanation of that content. But they said that doing so without attending to power and privilege probably won't make any difference because of the way that minority students get positioned out of discourse in classrooms. 
In particular, they argued that problematizing content in ways that challenge culturally dominant ways of knowing can actually lead to racialized controversies. So they advanced a framework called CPDE and suggested that instructors should reposition minoritized students perceived as low status, and they introduced four new CPDE principles; central to them is the idea that you use sociopolitical uncertainties to help problematize disciplinary knowledge in your courses. The way we responded to this in P507 was that in 2019, we added a cultural reflection to the four existing reflections. You can see it there. Now, we thought that the modest nature of this reflection would be helpful because it might appeal to adjuncts who have limited curricular control, and it might sidestep some of the pushback from more explicit approaches to equity, but I was really interested in whether it could catalyze larger changes via instructor positioning and repositioning through instructor comments. So I was very much looking for and encouraging students to use sociopolitical controversies, and I looked for opportunities to reposition students who might be in the minority in the class. The way we studied this was to compare the 2018 and 2019 courses. We coded the weekly wikifolios for whether or not they used sociopolitical controversies, and we did thematic analysis of anonymous course evaluations across the two courses. Then, in the 2019 course, we did thematic analysis of the actual content of the cultural reflections and interpretive analyses of social learning analytics and instructor repositioning, and that work is continuing. One of the things we found was a remarkable increase in the number of sociopolitical controversies. At baseline there were only a handful of them, as you see: 11% out of 230, and they were almost all associated with reliability and fairness and bias in standardized testing, the only chapters that introduced sociopolitical content. 
What we see with the addition of the reflections is this increased use of sociopolitical controversies. Again, this is not including the content of the cultural reflection itself; this is everything else. So we were pleased to see that we had the desired impact there. As for the themes of the sociopolitical controversies, nearly half of them raised assessment bias, but what's interesting is that half of the considerations of bias were outside of the sessions on bias. That was really important because it meant that students, for instance, during formative assessment, were appreciating that bias could be a problem there if they weren't careful. So we compared the course evaluations. What was interesting was that there was a politically conservative student in each of the sections, but the responses were very different. In 2018, the student complained quite bitterly about being silenced, and in 2019, the student self-identified and expressed appreciation of the ability to reflect on these issues. There you see the 2018 comment: "way too philosophical and political, I had less and less trust in either the instructor... I will not take another course by this instructor." Really quite different from what we saw in 2019: "I introduced my views, I was met with objectivity and politeness." And as I'll show you from the comment, you'll see why I was encouraged. "It was encouraging to me that my voice was a valuable contributor." I'm going to come back to that. So when we did the content analysis of the 35 cultural reflections, 18% of those got at something that's really important to me: they surfaced a sort of implicit bias that critical teacher educators have long pointed to as a source of inequity. There you see one example on reliability and fairness, but here you see another one on formative assessment, which is what I was getting at earlier: "Perhaps it is my own background as a straight white male that caused me to remember this only after I had completed most of the assignment." 
So this is what I mean, right? And this is an example, by the way, of prolepsis not working, right? So that's some evidence saying that maybe we needed to be more explicit about it, or that it kicked in at some point; that's debatable. This is the kind of thing that we're interested in studying much more carefully. And here you see the 2019 example. Now, I don't have time to really get into this, but what the student did was essentially reject the chapter's discussion of assessment bias and say that, you know, life is not fair. And here's my feedback, right? This was really hasty on my part; in retrospect, it was a pretty egregious mistake. I said, you know, basically bravo to you for expressing this. I was sitting in a refugio in the Alps with a line of people waiting for me to get to the satellite access, and was really quite hasty. In the end, I was quite embarrassed by the way that I tacitly endorsed the student's characterization of assessment bias as being politically correct. And really more importantly, there was a really blistering critique from my colleagues with expertise in diversity of several tacit assumptions in the reflection, in particular that I'm requiring someone rather than inviting them to speak for their group. There's potential for stereotype threat; I might have done more harm than good. We made some revisions to this: we added it later in the semester, when we prepared students to discuss it. I'm not going to talk about that today. Instead, I want to shift and turn this over to Joshua, because we decided to hold off; he decided to use this data. It was an ideal dataset for his early inquiry project, and he's currently in the process of revising his proposal. It's a fairly formal process, akin to dissertation-level work in many other programs. So I'm going to turn this over to Joshua. Joshua, take it away. Thank you, Dan. 
So as Dan mentioned, what we're really interested in here is how students are positioning themselves as authors and developing these sorts of expansive and contextualized frames. What we're going to be doing is developing a coded dataset that articulates the extent to which each student did this within the wikifolios, the reflections, and of course the discussions themselves. We'll also be looking at how the instructor repositions students via discussion comments, paying particular attention to the minority students, such as they are within these courses. Next slide; you can go ahead, Dan. As you can see from this graph, we do have some reason to think that there might be something to this process of reflection. On the x-axis are just the occurrences of sociopolitical controversy themes within a wikifolio; that's not the total number of themes that occurred, just whether they occurred or not across the student's wikifolios. And on the y-axis we have a proxy for engagement, for how deep the students got into the discussions, measured by the number of sentences that occurred. There was some correlation, though it wasn't significant, but we would expect that the more engaged students, the students who participated more effectively as experts and authors of content and engaged in these controversies, may also perform better on the assessments. So that's what we'll be looking at next. And I think you went back there, Dan. As for the way that we're going to do that: we initially wanted to use the Unizin Data Platform, and we were intending to do that for this work. But what we found is that, through the process of how it transforms and cleans data, this course hadn't been ingested yet, so we had to go back to the older Canvas data and do all the cleaning ourselves, which put us behind a bit. 
But as I mentioned, we'll be using these codes and then looking at the associations between codes with a technique called epistemic network analysis. On the right of the slide you can see an example of that; this is just a student's first wikifolio. As you can see, they tended to have stronger associations between places and past times, and then they held themselves, and others held themselves, accountable through that sort of articulation. What's neat about this is that we can actually develop a numeric representation of how the student contextualized, through the mean of these associated codes. We're going to be using that to do comparative work on how the learners performed over time as well as within the exams themselves. So hopefully we'll have more to report on that soon, and I'll turn it back to Dan. Thank you, Joshua. One of the things we're really excited about is that we do have exam data in this course. One of the things I'm focusing on quite a bit in this work is the analyses Joshua is doing: we're going to look at all of these different factors and see how they relate to exam performance, because our exams were very carefully constructed and we never teach to them, right? So arguably they're estimates of the transferability of that knowledge to a subsequent setting. We also have a survey that we've developed with the support of a grant, which we're looking at right now, and we're proposing it as an alternative to the more widely used Community of Inquiry survey, which has never been shown to be related to learning outcomes. So with that, we'll stop and take questions. I'm going to leave this slide up while we take questions, but for now, we're going to give it a wrap on this talk. So thanks for coming, everybody.
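To sketch the co-occurrence idea underlying epistemic network analysis, here is a toy example. The codes follow the expansive-framing dimensions named in the talk, but the utterances are invented, and the real analysis uses the ENA toolkit's full dimensional-reduction projection, not just this first counting step:

```python
from itertools import combinations

# Hypothetical expansive-framing codes applied to a student's utterances.
CODES = ["people", "places", "topics", "times"]
utterances = [
    {"places", "times"},
    {"people", "topics"},
    {"places", "times", "topics"},
]

# Count how often each pair of codes co-occurs within an utterance.
pairs = list(combinations(CODES, 2))
counts = {p: 0 for p in pairs}
for u in utterances:
    for a, b in pairs:
        if a in u and b in u:
            counts[(a, b)] += 1

# Normalize to unit length, the usual step before ENA's projection,
# giving the student's numeric "association vector".
norm = sum(v * v for v in counts.values()) ** 0.5
vector = {p: v / norm for p, v in counts.items()}
print(max(vector, key=vector.get))  # strongest association
```

Averaging these per-student vectors over time is what makes the comparative work described above possible: two students (or one student at two time points) can be compared as points in the same association space.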

Means and Ways: Interaction of Family Income and Gender in Academic Field Selection and Persistence - Michael Kaganovich, Morgan Taylor, and Ruli Xiao

Kaganovich et al. focus on how tuition cost differences for in-state and out-of-state domestic students and their family economic status, represented by residential zip-codes, influence enrollment and attrition decisions.

Learn more about Michael Kaganovich

Learn more about Morgan Taylor

Learn more about Ruli Xiao

Description of the video:

Hi, my name is Michael Kaganovich. This project is joint work with my colleague in economics, Ruli Xiao, and our recent doctoral student, Morgan Taylor, who defended and started at the University of Georgia this fall. Here's our motivation for this project. As we know, an individual's decision whether to go to college is a major determinant of their lifetime income. In recent decades, however, it's become evident that what one does in college, namely the major one graduates in, has statistically an even greater impact on lifetime income. It's also been a common belief that access to college education is a great social equalizer, so with the growing access to college, one might expect an overall more equal income distribution. There is growing evidence that this is not quite so. The richness of the Indiana University Learning Analytics dataset, especially in terms of students' choices of, and persistence in, academic fields, gives us an opportunity to look at the means and ways, as the title suggests, and to see how they shape these effects. In this project we find a strong positive correlation between student family income and students' likelihood to choose and then graduate from a more lucrative academic discipline. We find this to be true for both men and women. One of our biggest surprises is that students from lower-income groups demonstrate a very high, in fact the highest, likelihood of choosing a STEM discipline. However, their persistence in these fields is also the weakest, which results in a lower incidence of graduation of low-income students in STEM. Another big surprise is a strong gender gap in favor of men in selecting Business and Economics as an initial academic field, which is further exacerbated by women's weaker persistence there. This gap only gets larger as one moves into higher income quartiles. So our overall conclusion offers strong evidence contrary to the belief in college as a great social equalizer. 
Here's how we construct information on student incomes. We have the info on student family residential zip codes and obtain the US Census Bureau's data on median incomes associated with those zip codes, which we then use as proxies for student family income. We then arrange this distribution into quartiles, from the first, the lowest, to the fourth, the highest. We break all IUB majors into five broad academic categories: STEM, Business and Economics, Social Sciences and Humanities, other Professional Schools, and Education as a separate category. We look at each student's initial choice of academic category and at the final point of their academic career at IU: the academic category in which they graduate, or dropping out as a distinct outcome. One big surprise is how strongly the choice of Business and Economics as a starting discipline is skewed toward students from higher-income backgrounds. Table 3, which presents just descriptive statistics, in other words not controlling for student characteristics other than income and gender, shows, first of all, that the distribution of men and women at IU is not balanced across income quartiles. As you can see, women overall, i.e., in total, are more prevalent in the lower income brackets, but not so in the high-income quartiles. Another striking thing in this table is the strong gender imbalance in favor of men in the choice of Business and Economics, especially among students from high-income brackets. We then use regression analysis to predict the likelihood of choosing particular initial academic categories, and then apply multinomial logistic regression models in order to predict the probability of persistence, now also controlling for the initial choice and other student characteristics, including gender, income, and other socio-demographic and academic variables. Our results are summarized in two tables. In Table 5, we present the probability of choosing an initial academic category. 
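The income-proxy construction described above can be sketched as follows; the zip codes and median incomes are made up for illustration, and the real project uses Census Bureau data for the full student population:

```python
import numpy as np

# Hypothetical zip-code -> Census median household income lookup.
zip_income = {"47401": 52000, "46220": 78000, "46032": 104000, "46402": 34000}
students = [("s1", "47401"), ("s2", "46220"), ("s3", "46032"), ("s4", "46402")]

# Proxy each student's family income by their home zip's median income.
incomes = np.array([zip_income[z] for _, z in students])

# Bin the resulting distribution into quartiles:
# quartile 1 = lowest income, quartile 4 = highest.
cuts = np.percentile(incomes, [25, 50, 75])
quartile = 1 + np.searchsorted(cuts, incomes, side="left")
for (sid, _), q in zip(students, quartile):
    print(sid, "Q%d" % q)
```

The quartile label, rather than the raw income, is what enters the regressions, which is why the tables report results by income quartile.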
As we can see, much to our surprise, for both men and women the probability of choosing STEM is much higher for students from the low-income quartiles than the high-income ones, and the situation is strikingly the opposite in Business and Economics. Another stark result, perhaps less surprising, is how stable across income quartiles the likelihood of choosing Social Sciences and Humanities is among women, and how different the picture is for men. Now, this final table concerns IU students' final outcomes. It shows how much the likelihood of success, of graduating from the initial academic field, grows with student family income for both men and women. Another important finding is a striking gender gap between men and women in choosing Business and Economics, which grows with family income. A further important finding is that despite showing high initial interest in pursuing STEM, students from low-income families are a lot less likely to graduate from STEM. We observe a similar phenomenon in Business and Economics. One interesting theory that is consistent with these results, and may help explain them, concerns the so-called income-related information gap, which I hope to be able to talk about during the question and comment period. So our overall conclusion is that students of both genders from low-income backgrounds are less likely to graduate from more lucrative academic fields and more likely to drop out of IU, whereas students from high-income backgrounds are more likely to graduate, and to graduate in fields that offer higher monetary rewards. Thank you.

IUB Finite Mathematics and Retention: Comparing Preparatory Mathematics Courses vs. Co-Curricular Mathematics Support Courses - Dr. Andrew M. Koke et al.

Koke et al. determine the extent to which the preparatory mathematics courses lead to attrition and the extent to which the X101 finite help class positively impacts student performance in finite.

Learn more about Andrew Koke

Learn more about Molly Burke

Learn more about Anthony Guest-Scott

Description of the video:

Hello and welcome to IUB Finite Mathematics and Retention: Comparing Preparatory Mathematics Courses versus Co-Curricular Mathematics Support Courses. I'm Dr. Andrew M. Koke, joined by Sharon Hoffman, Dr. Molly Burke-Leon, and Dr. Anthony Guest-Scott. All of us work at the Student Academic Center. The first question we wanted to ask for this Learning Analytics grant is about the math requirement at Indiana University Bloomington. Most students satisfy the math modeling requirement by taking one of three classes: calculus, M211; finite math, M118; or finite math, D117. Which of these classes they go into is largely dependent upon their ALEKS score. ALEKS is a math placement exam used across the nation, and it ranks students' math proficiency on a scale of 1 to 100. With a score of 75 or higher, students can go into calculus; with 50 to 74, they can go into finite math M118. In fact, finite math M118 is the largest class on our campus, with around 4,000 to 5,000 freshmen and other students taking that course every year. Things get more complicated when students do not perform well on the ALEKS placement exam. If they score 35 to 49, they will often take the two-semester version of finite, taking Math D116 in one semester and Math D117 in the second semester. And if their ALEKS placement is 34 or lower, there's a host of other classes they might take in preparation for finally getting into Math D116 and D117. So the weaker your ALEKS Math Placement Exam score, the more math you need to take. That kind of makes intuitive sense, right? Students who aren't well prepared for finite should maybe take other prep classes in order to be ready. But the unintended consequence is that some students who don't particularly like math, and aren't particularly good at math, might take four or five math classes in order to meet the math modeling requirement. 
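The placement thresholds described above can be summarized in a small sketch. The course labels follow the talk; this is an illustration of the rules as stated, not official IUB placement logic:

```python
# ALEKS score -> math pathway, per the thresholds described in the talk.
def math_placement(aleks_score):
    if aleks_score >= 75:
        return ["MATH-M211"]                       # calculus
    if aleks_score >= 50:
        return ["MATH-M118"]                       # one-semester finite
    if aleks_score >= 35:
        return ["MATH-D116", "MATH-D117"]          # two-semester finite
    # 34 or lower: preparatory coursework first, then the D-sequence.
    return ["prep courses", "MATH-D116", "MATH-D117"]

print(math_placement(80), math_placement(40), math_placement(20))
```

Laid out this way, the cost of a low score is visible: each list entry is at least a semester, so students below 35 face a pathway of three or more semesters before the requirement is met.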
Each one of those takes a semester, so it's quite possible for students to spend two years completing their math modeling requirement. Our first question was whether students who have an ALEKS score below 35 leave the university before graduating. We studied, going back to 2009, all the students who took math classes on our campus with an ALEKS score below 35. What we found is that, in fact, a lot of students drop out of IU as they face this math hurdle. And you can imagine why: if you're not particularly good at math, if you don't really like math, and you see that there are four semesters of math ahead of you, and one of those semesters goes poorly, you might think to yourself, this isn't the school for me. The most startling result we found is that a very high proportion of students will drop out of IU instead of overcoming that math hurdle. If a student is an underrepresented minority, they're more likely to drop out; if they're Pell-eligible, they're more likely to drop out; and if they're first-generation, they're more likely to drop out as well. So our current system of preparatory classes isn't necessarily the best one. There is possibly another way forward. Since 2013, at the City University of New York, some scholars have been tracking their population of students who also need to take preparatory math courses. What they found was that these students actually do better, not just in the required math course but also in all future math courses, if instead of taking the math prep courses they simply enroll in the required course and partner it with a co-requisite math support class. The Student Academic Center has just such a co-curricular math support class: Education X101. We wondered, could we help IUB students skip those prep classes and actually get across the finish line if we partner them more closely with Education X101? 
That led us to our second question: does our Education X101 finite help class actually improve student grades in finite? What we found is that yes, it does. Overall, students who take Education X101 get fewer F and W grades and more B, C, and D grades in finite, and that effect actually goes up if they are Pell-eligible, an underrepresented minority, or a first-generation student. That leads us to our next step. We know that many students take the prep classes and drop out of IU; we know that at the City University of New York there was an alternate model; and we also have a successful co-curricular class, Education X101. Therefore, we propose, in fall 2021, partnering with IUB's Groups Scholars Program and choosing 40 students who score below a 35 and would normally be taking those math prep courses, and instead putting them in Math D116 and Education X101 at the same time, getting them across the math hurdle in two semesters. We think the tools are in place for these students to have success in two semesters, in what would normally take four semesters. So stay tuned: next year, with another Learning Analytics grant, we're going to study this in our pilot program. Thanks very much, everyone.

Factors Determining Student Success in Downstream Courses from Business Statistics - Nastassia Krukava and John Stone

This project aims to address questions related to student performance in downstream courses from introductory Business Statistics at Indiana University (IU).

Learn more about Nastassia Krukava

Learn more about John Stone

Description of the video:

It is our great pleasure to present the results of a new learning analytics project that looks at the factors determining student success in downstream courses from business statistics. This project is joint work by Learning Analytics Fellows Nastassia Krukava and John Stone, presented by Nastassia Krukava in this video. Within this project, we aim to look at several questions related to students' performance after introductory business statistics courses at IU. I'd like to start with some background information to emphasize why the questions we raise in this project are important. At IU, there are two main business statistics courses: Economics E370 and Statistics S301. These courses are substitutable for most students required to complete a business statistics curriculum. Both are similar in content and require the same prerequisite, finite mathematics, the so-called M118 course. Since students have two different options, E370 and S301, for completing their business statistics requirement, and these options are very similar from the students' perspective, we'd like to ask whether these courses equally prepare students for their downstream courses. In other words, we try to understand whether the students who take E370 and S301 perform equally well or differently in their subsequent courses. We look at two downstream courses: Business G350, which stands for Business Econometrics, and Business M346, which stands for Analysis of Marketing Data. These courses rely heavily on the knowledge of statistical tools taught in introductory business statistics, but in this short video we only present the findings related to G350, noting that our results for M346 are not drastically different. We also look at a related question and try to understand the effect of transfer credit for finite mathematics, M118, from community colleges on students' performance in G350 and M346.
The main puzzle here is that both business statistics courses require M118, which many students choose to take at a local community college, presumably because it is easier to earn a higher course grade there, but whether community colleges prepare students well for their future courses is not immediately clear. So a related question is whether the students who fulfill the M118 requirement at a local community college perform differently in their downstream courses like G350 and M346. Let me jump right into the methodology we use for the analysis of our first question, which looks at the differences in downstream performance for students taking different business statistics options. As I already mentioned, in this video we present our findings for one downstream course, G350. Our baseline population regression model to analyze the relationship between the G350 grade and the business statistics course completed by a student is presented here on the slide, where we use a student's GPA grade in G350 as the left-hand-side variable that depends on a dummy variable for the business statistics course, denoted by E370, and other controls denoted by W. This vector W includes students' observable characteristics such as high-school GPA, SAT score, indicators for Pell Grant recipients, and others. Our main variable of interest in this model is the dummy for the business statistics course. This E370 variable takes the value 1 if the student took E370 and 0 if the student completed S301. The estimated coefficient on this variable corresponds to the effect of taking E370, as opposed to S301, on a student's grade in G350. To understand the effect of M118 taken at a local community college, our second question, we regress students' grades in G350 on the same set of controls included in W and our main variable of interest, denoted by M118_LCC. This variable is a dummy equal to 1 if the student transferred M118 from a local community college and equal to 0 if the student completed M118 at IU.
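The dummy-variable regression described above can be sketched in a few lines. This is a minimal illustration on synthetic data, not the authors' actual model or data: the variable names, coefficient values, and controls are invented for the example, and the real specification includes more controls in W.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for the real student records (illustrative only).
e370 = rng.integers(0, 2, n)        # 1 if the student took E370, 0 if S301
hs_gpa = rng.normal(3.4, 0.3, n)    # a control from the vector W
sat = rng.normal(1200, 120, n)      # another control from the vector W

# Simulated G350 grade with an assumed small negative E370 effect (-0.04).
g350 = 0.8 * hs_gpa + 0.001 * sat - 0.04 * e370 + rng.normal(0, 0.4, n)

# OLS: regress the G350 grade on the E370 dummy plus controls (with intercept).
X = np.column_stack([np.ones(n), e370, hs_gpa, sat])
beta, *_ = np.linalg.lstsq(X, g350, rcond=None)
print(f"estimated E370 effect: {beta[1]:.3f}")
```

The coefficient `beta[1]` plays the role of the estimated effect of taking E370 rather than S301, holding the controls fixed.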
So the estimated coefficient on this variable will inform us whether students who take M118 at a local community college tend to do worse, or possibly better, in G350 compared to those who take M118 at IU. Results of the estimation are displayed in this table. The first column shows estimates of the coefficients from regressing the G350 grade on the E370 dummy and other controls. The estimate of the coefficient on the E370 variable is negative and indicates that a student who took E370 earns a GPA grade in G350 that is almost 0.04 points lower on average than a measurably comparable student who took S301. However, this point estimate is not statistically significant, so we cannot rule out that students who took E370 and S301 perform equally well in G350. The confidence interval for this estimate is between -0.13 and 0.05, so we also cannot rule out small differences in students' performance in G350 that can be attributed to their business statistics course. But again, given that the bounds of the interval are quite narrow around 0, we are talking about the possibility of small differences. The second column of this table presents estimates of the coefficients for the second model, where we regress the G350 grade on an indicator variable for M118 taken at a local community college and other controls. The estimated coefficient on our main variable shows that a student who completed M118 at a local community college earns a GPA grade in G350 that is 0.11 points lower on average than a measurably comparable student who completed M118 at IU. This estimate is also not statistically significant, so we cannot rule out the possibility that taking M118 at a local community college does not affect students' performance in G350. But unlike the coefficient on E370 in the first model, this estimate has a confidence interval ranging from -0.28 to 0.06, with a more pronounced negative value for the lower bound.
So we cannot rule out the possibility of a modest negative effect of taking M118 at a local community college on students' performance in G350. As a concluding remark, I'd like to say that we conducted a similar analysis of the effect of taking E370 versus S301, and the effect of taking M118 at a local community college, on students' performance in M346. We used the same methodology as demonstrated in this video for G350, and we obtained similar results. To wind up, I'd like to thank you for taking an interest in our findings. Your questions and suggestions are very welcome.

What drives student major change - Jie Li

Li identifies required Finance, Accounting, and Marketing major courses, studies their enrollment changes in response to trends in major declaration, addresses factors that directly or indirectly affect the enrollment of a particular course, and experiments with predictive data models.

Learn more about Jie Li

Description of the video:

Hello. My name is Jie Li and I'm from the Kelley School of Business. I'm here to talk about my research subject in 2020: the drivers behind student major change behaviors. The data are from Bloomington undergraduate students enrolled between Fall 2006 and Spring 2019. The data include student demographic information, admission information, and academic performance information, including the courses they take each term, their GPA, and the credits earned. If students have graduated, the data also provide their degree earned, time to graduation, etc. This is a continuation of my research in 2019. My focus is narrowed to only students in the business school, and I found last year that over 22,000 students enrolled in the business undergraduate program. Among them, 12,000 have graduated with at least one business degree. I split the students into two groups: major sticklers and major switchers. Major sticklers are students who earned the degree they declared at the time of admission, and about 55% of students fall into this category. The other category are students who did not earn the first degree declared at the time of admission. This year, I further narrowed my focus to five majors: three traditional majors, accounting, finance, and marketing, and two non-traditional majors, information systems and supply chain. I looked at three common course groups. The first group is I-Core, which covers the major business subjects. I also have a group of courses that teach students quantitative and technical skills. Soft skill courses are courses that train students in presentation, communication, and career development skills. First, I looked at the switchers and sticklers, and no significant academic performance difference was identified between them. But I did find some demographic traits that make students more likely to be a stickler, including being female, international, first-generation, or admitted through the standard admission process.
A smaller group of students was identified as late switchers. These are students who declare or drop majors in term six or later. About 1/5 of students belong to this group. This group does have lower GPAs, and their demographic traits are somewhat different. So I tried to create a predictive model to identify those students, but the model's performance was too poor to be useful. I did find some notable demographic differences among students earning different degrees. The chart on the left is the gender distribution. Finance has a significantly lower percentage of female students, and marketing has an overrepresentation of female students. The broken line in the middle represents the overall female percentage. The chart on the right is the ethnicity percentage distribution across the five degrees. Students graduating with a marketing degree have significantly lower representation of international students. Students' performance in the three common course groups was analyzed. This is the chart of students' average I-Core course performance. This is the number of students who took I-Core, these are the overall GPA averages of the different courses, and these numbers represent the GPA by course and by degree. It's obvious that accounting students performed the strongest. I also conducted an ANOVA analysis to make sure that the course GPAs can be used to differentiate students earning these five different degrees. It showed that I370, the discussion course, does not show significant differences among students, but all five other courses can be used. This is an example of a box plot using F370, and these charts show how these courses group the five degrees into different groups. These two charts are student performance in the quantitative and technical courses and in the soft skill courses. In the quantitative courses, all of them have differentiating power.
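An ANOVA check like the one described above asks whether mean course GPA differs across degree groups. Here is a minimal sketch on synthetic data; the group names and GPA values are invented for illustration and do not reflect the study's actual numbers.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Hypothetical F370 GPAs for three of the five degrees (illustrative values).
accounting = rng.normal(3.5, 0.3, 80)
finance = rng.normal(3.4, 0.3, 80)
marketing = rng.normal(3.1, 0.3, 80)

# One-way ANOVA: do the group means differ more than chance would allow?
f_stat, p_value = f_oneway(accounting, finance, marketing)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A small p-value indicates the course's GPA distribution differs across degrees, i.e. the course has "differentiating power" in the sense used above; a course like I370 would instead yield a large p-value.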
Among the soft skill courses, three of them should not be used, and we can see that accounting and finance students are still the strongest performers in the quantitative and technical courses, but it's very obvious that information systems students perform very well in the computer and technology courses. For the soft skill courses, marketing students really shine; they are the best performers, and IS students perform the weakest. I-Core gives students exposure to the major business subjects, and it was hypothesized that I-Core has a significant influence on students' major decisions; this is confirmed in this research. The numbers over here indicate how many students declared majors in the I-Core term or in the term immediately after I-Core. Most students take I-Core in term five, and we see the highest numbers of major declarations in terms five and six. Overall, students who declared new majors perform generally better than the overall average. Among them, the best-performing students declare accounting and finance, and the weakest-performing students declare marketing and IS, in both the I-Core term and the subsequent term. The number of students declaring majors is significant: over 1/5 of students declared new majors during the I-Core term, and 1/3 of students declared new majors in the term after I-Core. Over 40% of students earning IS and supply chain degrees declared new majors in these two terms, and over 1/4 of students declared the other three majors in these two terms. Even though not as many people dropped majors around the time of I-Core, we still have a significant number of them dropping in terms five and six, where the numbers are the highest. About 9% of them dropped majors during the I-Core term, and about 16% dropped majors right after the I-Core term. The highest drop is marketing, and the lowest drop is finance. It's very obvious in both charts that students who perform the weakest in I-Core drop finance.
Students who perform the best drop marketing, information systems, or supply chain management. From the study, I have identified some courses that can be labeled as major influencers. The first one is F370, the finance component of I-Core. If students perform poorly in this course, it's very likely they are going to drop finance. Another course is the operations component of I-Core, P370. If students perform relatively well in this course, but not as well in the other I-Core courses or other common course groups, it's likely that they will choose supply chain management. If students perform very well in the computer and technology courses but have weak performance in the soft skill courses, they have a higher chance of moving to the information systems major. If students have strong performance in the soft skill courses, their performance in the quantitative and technical courses is weak, and they don't perform very well in F370 and P370, then they have a high inclination to go into marketing. How would these findings impact our decision making? I would suggest that we provide more support and guidance to students who haven't decided their major by their junior year, so that we can reduce late major changes, that we encourage more female students to pursue finance, and that we strengthen quantitative and technical skills for marketing students. We should try to achieve gender balance and diversity in finance and marketing, and help international students overcome language barriers and cultural differences. We should help them improve their communication, presentation, and career building skills, and include some language- and culture-neutral content in the marketing curriculum to attract more international students. Thank you.

DataCamp learning platform: reducing skills gaps between courses and job market - Olga Scrivner

Scrivner examines students’ profiles and develops a learners’ classification system to explore patterns in their course trajectories and learning behaviors.

Learn more about Olga Scrivner

Description of the video:

The rationale for this project is the increasing gap between the skills in demand on the job market and higher education offerings, and also the increasing student enrollment in Massive Open Online Courses. Massive Open Online Courses facilitate career transitions, provide training in practical skills, offer instant feedback, and also increase students' motivation by implementing game-based approaches, for instance, providing points, rewards, certificates, or even competitions. DataCamp is one of the online platforms for learning data science skills. It is one of the most popular platforms for data science, covering languages such as Python and SQL. The hypothesis of this project is that by incorporating DataCamp into our courses, we will increase practical skill training and personal motivation for our students, provide them with a flexible, interactive platform with instant feedback, and, finally, reduce the skills gap. By taking data science courses, our students expect to obtain skills that can be applied more widely, and they hope to become more adaptable in the future as technology continues to disrupt business practices; some of them are changing their careers. To test and evaluate the hypothesis, this project aims to analyze the current job market for data scientists and data science as an educational degree. The data scientist occupation will be analyzed for hard, technical skills; several courses from the online data science program taught by the principal investigator for this project will be evaluated in terms of the skills they offer and learners' profiles; the DataCamp platform will be used to provide coding practice and practical tutorials; and a skill comparison will be made to identify the gap. The project design includes three components: learners, data science courses, and the job market. From introduction posts collected on Canvas and Piazza, learners' current skills will be manually extracted from PDF files.
The skills will be linked to the course roster by username. Then the course roster will be linked to SIS data using student IDs. Augmented course rosters will be linked to DataCamp reports using students' usernames, the data will be cleaned, for instance by removing some users, and then learners' profiles will be analyzed. Skills taught will be compared with data scientist job postings. Finally, the skill gap will be identified and adjustments to the syllabus will be discussed. I'd like to start with a definition of data scientist as a job title. The term data scientist was coined in 2008 by the leads of data and analytics efforts at LinkedIn and Facebook, and it became one of the hottest jobs of the 21st century. The Occupational Information Network, O*NET, describes it as a job that transforms raw data into meaningful information using data-oriented programming languages and visualization software. Data scientists apply data mining, data modeling, natural language processing, and machine learning. Data science incorporates a variety of hard and soft skills, and this project focuses on hard, technical skills. According to labor market analytics, three programming languages, SQL, Python, and R, are among the most frequent skills present in job postings advertised online. However, based on workforce profile data, that is, online resumes, it is noticeable that those skills are less frequently mentioned in user profiles. Several courses taught between 2018 and 2020 will be evaluated. At the current stage, the analysis applies to the first course, Introduction to Python, which consisted of 61 students. According to the Python developer survey conducted in 2018, high-in-demand libraries in Python include NumPy, Pandas, Matplotlib, and many others. The DataCamp training offered in this course includes an introduction to NumPy, an introduction to Pandas, and a brief introduction to Matplotlib: only 3 skills out of the 11.
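The record linkage described in the project design (skills to roster by username, roster to SIS by student ID) is a pair of table joins. A minimal sketch with pandas follows; the miniature tables, usernames, IDs, and column names are all hypothetical stand-ins for the real rosters and SIS data.

```python
import pandas as pd

# Hypothetical miniature versions of the three tables being linked.
skills = pd.DataFrame({"username": ["ab1", "cd2"], "skill": ["Python", "SQL"]})
roster = pd.DataFrame({"username": ["ab1", "cd2"], "student_id": [101, 102]})
sis = pd.DataFrame({"student_id": [101, 102], "major": ["DS", "Stats"]})

# Skills -> roster by username, then the augmented roster -> SIS by student ID.
linked = skills.merge(roster, on="username").merge(sis, on="student_id")
print(linked)
```

Inner joins like these silently drop unmatched rows, which is one reason the cleaning step (removing some users) matters before analyzing learners' profiles.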
Learners show an uneven gender distribution, with only 12 female students. Secondly, it is interesting to examine learners' pathways outside of the assigned courses; students have access to browse and select any courses on their own in addition to the assigned DataCamp training. If we look at female students who chose courses that were not selected by male students, we see that they chose Data Science R Basics, Intermediate R, RBootcamp, and also Data Visualization and Deep Learning in Python. Of course, this is only an interesting observation, as this project is still in progress. We can share several key takeaways. First, focusing on employer demand is important, as the program can provide graduates with clear career pathways. Second, drilling down to skills: by understanding the in-demand skills, course descriptions can be tailored to highlight skills taught in the curriculum that employers are looking for. Finally, there are new opportunities, as we can help drive enrollment and create new online courses that are practical and relevant for the 21st century.

Improving placement accuracy and efficiency of incoming international students into ESL support courses - Sun-Young Shin

Shin investigates the extent to which TOEFL iBT subskill scores are comparable to IAET test scores for placement into various levels of ELIP courses.

Learn more about Sun-Young Shin

Description of the video:

Hi everyone. My name is Sun-Young Shin and I am an associate professor in the Department of Second Language Studies at IU, and the title of my study is Improving Placement Accuracy and Efficiency of Incoming International Students into ESL Support Courses. ESL here stands for English as a second language. Like at all Research One universities in the US, international students are required to demonstrate their overall English proficiency before they're admitted to IU. However, even though students meet this English proficiency requirement, many of them are still found to struggle because of their lack of academic English proficiency, which is critical for their academic success at IU. To address this, many Research One universities, like Big Ten schools, offer English support courses, and those students need to be identified and supported to improve their academic English proficiency. In order to identify those students and place them into academic English support courses appropriately, many universities, including IU, have been using their own local, in-house English placement tests, because pre-arrival standardized test scores like IELTS and TOEFL iBT have been shown not to work for placing students into appropriate courses: these tests target a much wider range of overall language proficiency levels. Plus, these test scores are less informative for placement purposes because they do not represent students' most up-to-date academic English language abilities and do not match the curriculum of our ESL support courses. Usually, students' TOEFL iBT scores are a year old or more by the time they arrive. So using those kinds of standardized pre-arrival English test scores for placement purposes comes with serious threats to the placement of international students into appropriate English support courses, and these facts are well established in the literature.
However, recently, as you may have noticed, due to the COVID-19 pandemic, students were not able to be tested on campus, and New Student Orientation is now scheduled earlier than before. So there was a need to utilize students' existing English scores for placement into the English Language Improvement Program (ELIP) courses at IU. This study aimed to address this urgent need: I tried to equate the IU in-house English placement test scores, from the test called the Indiana Academic English Test, which I'll abbreviate as IAET, to students' existing TOEFL iBT subsection scores, as a contingency plan to accommodate assessment during the COVID-19 pandemic only. For this, I worked with Dr. Fiorini from Bloomington Assessment and Research, BAR, to create cutoff scores based on the TOEFL iBT subsection scores to determine students' placement into, or exemption from, the English Language Improvement (ELIP) courses, based on the placement of past cohorts, which was done using IAET test scores. Let me give you some background information about the IAET, the Indiana Academic English Test. The IAET was developed over about two years and was piloted and rigorously vetted before implementation. The test has four subsections. Section one is narrative writing; section two is reading to write. These two subsections test academic English writing. Students are then given a ten-minute break, take an audio-visual academic English listening test, which consists of actual IU introductory video lectures, and then move on to a face-to-face oral interview with two interviewers.
The entire test takes about two hours, and based on these test scores, students are placed into three different levels of academic literacy development programs. Based on the listening test scores, they are either exempted from or placed into an Academic Listening course, and based on their performance in the oral interview, they are placed into pronunciation and speaking fluency development courses. The IAET has proven to have relatively high reliability: the inter-rater reliability of the writing test scores is about 0.77, and the internal consistency reliability of the listening section is about 0.74, considering that the cutoff point was set for Academic Listening course placement. I then checked the correlations between the IAET and TOEFL iBT test scores at the subsection level to establish the concurrent validity of the IAET. The correlations range from 0.53, between the IAET listening and TOEFL iBT listening scores, up to about 0.84 for writing; speaking has the lowest correlations, about 0.30 to 0.34. These correlations are statistically significant, as you can see from this slide, but they are still too weak to warrant any claim that these two tests provide identical information about students' academic proficiency for placement purposes. However, as I mentioned previously, there are new needs for early pre-registration into ELIP courses because of the earlier New Student Orientation, and there are demands for improving the logistical efficiency of placement decisions, because students are usually required to take the IAET within a very limited window of time after their arrival at IU. Students might have fatigue and jet lag, which might affect their test performance negatively. Also, about 85% of international students have TOEFL iBT test scores.
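A subsection-level correlation check like the one just described can be sketched with a Pearson correlation on paired scores. The data below are synthetic and the score scales are only loosely modeled on the real tests; this is an illustration of the computation, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)

# Hypothetical paired writing subsection scores for the same 200 students.
iaet_writing = rng.normal(70, 10, 200)
toefl_writing = 0.8 * iaet_writing + rng.normal(0, 5, 200)

# Pearson correlation between the two tests' writing subsections.
r, p = pearsonr(iaet_writing, toefl_writing)
print(f"r = {r:.2f}, p = {p:.3g}")
```

Even a correlation in the 0.8 range, as reported for writing, leaves substantial unshared variance (roughly 1 - r² of it), which is why a significant correlation alone cannot justify treating the two tests as interchangeable for placement.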
So, as a contingency plan to deal with the challenges we encountered due to the pandemic, I tried to equate the IAET scores with the TOEFL iBT scores based on past cohort data. I used the equipercentile linking method, which has been used extensively in the fields of language testing and educational measurement. Based on this equipercentile linking method, I came up with cut-off scores based on the TOEFL iBT subsection scores, but unfortunately, we found considerable misalignment between the two tests' scores, which suggests that the use of the TOEFL iBT subsection scores is not really reliable for making placement decisions about students. The misalignment ranges from 30% up to 50%. To make it worse, since fall 2020, Duolingo English Test scores are now accepted as a legitimate indicator of required English proficiency. The Duolingo English Test is pretty cheap: students only pay $49, they can complete the test at home without traveling to a testing center, and the test lasts only one hour. But this test has been severely criticized because it does not have distinctive subsections for measuring different subskills like other standardized English proficiency tests. Also, this test apparently lacks validity because it does not really assess students' academic language ability, discourse-level competence, or interactional competence, which are the core of the academic English abilities we require in the higher education context. So, considering the previous research and the available data we analyzed, using the TOEFL iBT or other standardized test scores is not really feasible for placing students into appropriate academic English support courses, and this kind of unreliability makes the option unacceptable beyond the emergency of this pandemic. There are also logistical concerns about using the IAET in its current form.
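Equipercentile linking maps a score on one test to the score on the other test that has the same percentile rank. The sketch below is a minimal, simplified version on synthetic data; operational equating uses smoothed score distributions and established software, and the score scales here are invented for illustration.

```python
import numpy as np

def equipercentile_link(x_scores, y_scores, x_new):
    """Map a score on test X to the Y scale by matching percentile ranks."""
    x_sorted = np.sort(x_scores)
    y_sorted = np.sort(y_scores)
    # Percentile rank of the new X score within the observed X distribution...
    pr = np.searchsorted(x_sorted, x_new, side="right") / len(x_sorted)
    # ...then the Y score sitting at that same percentile rank.
    return np.quantile(y_sorted, np.clip(pr, 0.0, 1.0))

rng = np.random.default_rng(2)
iaet = rng.normal(70, 10, 400)    # hypothetical IAET writing scores
toefl = rng.normal(22, 4, 400)    # hypothetical TOEFL iBT writing scores

# A TOEFL writing score of 22 mapped onto the IAET scale.
linked = equipercentile_link(toefl, iaet, 22.0)
print(f"TOEFL 22 -> IAET {linked:.1f}")
```

Cut-off scores come from applying this mapping to the IAET cut points; "misalignment" then means the share of past students whose TOEFL-based placement disagrees with their actual IAET-based placement.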
We strongly encourage adopting an online test format, so students can take the IAET at home before they come to IU. But the test is then high-stakes, so we have to make sure we use an online proctoring service like Examity. This kind of practice has been used at many other Big Ten universities this semester during the pandemic. The future plan would be to implement the IAET in an at-home exam format so we could minimize the downsides of administering the IAET on campus while still placing students into appropriate courses in a more reliable and valid manner. OK, thank you for listening to my talk, and if you have any questions, concerns, or suggestions, feel free to contact me.