Road Work Ahead

#4 - Mark Johnson: AI in Education & Product Development, LLMs as UI, Actual Demand for AI

Waypost Studio | Sam Gerdt Season 1 Episode 4

When you work and live deeply within the realm of new technologies, it can be easy to forget that most of the world doesn't share that same perspective. Many of us have been spending the last 9 months freaking out about the generative AI arms race and the end of white-collar work, but my conversation this week with Mark Johnson steered my thinking in a very different direction.

Mark is the co-founder and CTO of an online learning platform called Pathwright, and the majority of our conversation was spent talking about AI from the perspective of the end user. The origin of Mark's 11-year-old tech company looks a lot different than we're accustomed to seeing. There was no venture capital or seed round. Instead, Pathwright was funded and built by aligning the development roadmap directly with the specific needs of early adopters.

In the decade since, Pathwright has continued to develop its platform with the mindset of serving actual needs and staying true to its core values. So, I was curious how a founder and CTO like Mark is thinking about the rush in the tech world to implement AI as quickly as possible. Is this something that his customers are demanding, or are technologists putting this pressure on themselves?

Besides this topic, we also talked about the implications of AI in education, the likelihood of an AI bubble, and how to foster an optimistic view of the future in the way you think about and interact with new technology.

Sam Gerdt:

Welcome everybody to Road Work Ahead, a podcast that explores the unmapped future of business and technology. My name is Sam Gerdt and I am your host. When you work and live deeply within the realm of new technologies, it can be easy to forget that the majority of the world does not share that same perspective. Many of us have been spending the last nine months freaking out about the generative AI arms race and the end of white collar work, but my conversation this week with Mark Johnson steered my thinking in a very different direction. Mark is the co-founder and CTO of an online learning platform called Pathwright, and the majority of our conversation was spent talking about AI from the perspective of the end user, the learner.

Sam Gerdt:

The origin of Mark's 11-year-old tech company looks a lot different than we're accustomed to seeing. There was no venture capital or seed round. Instead, Pathwright was funded and built through a process of aligning the development roadmap directly with the specific needs of early adopters. In the decade since, Pathwright has continued to develop its platform with the mindset of serving actual needs and staying true to its core values. So I was curious how a founder and CTO like Mark is thinking about the rush in the tech world to implement AI tools as quickly as possible. Is this something that consumers are demanding, or are the technologists putting this pressure on themselves?

Sam Gerdt:

Besides this topic, we also talked about the implications of AI in education, the likelihood of an AI bubble, and how to foster an optimistic view of the future in the way you think about and interact with new technology. Mark is a thoughtful business leader and I hope this conversation encourages you like it encouraged me. Mark, Pathwright's been around for 11 years now, which is a really long time. It's such a cool platform. I think I remember when you guys launched it and how unique it was then, and I think it continues to be unique now. But I'm curious: in 11 years, what are some of the things that you've seen change, maybe on the technological side, but also just philosophically, with online education?

Mark Johnson:

When we first started, the landscape of online education tools was not great.

Mark Johnson:

You had tools like Moodle and Blackboard and just user experience was not even a thing in the educational space.

Mark Johnson:

It was more of an administrative experience, I guess you could say, where you just try to check all the feature boxes that the school wants and don't worry about the students. They're not really the user because they're not the ones paying you money. So that was a particular niche that we tried to fill early on and are still filling.

Mark Johnson:

I guess one thing that's changed is that the baseline of the software has gotten better. It's gotten a bit simpler. But even so, the most popular learning tools like Canvas and some of the other big ones, while they are more user friendly than Blackboard or Moodle, are still, I would say, admin first, not student first. They're still designing for the people who buy, so that part hasn't changed. With mobile phones becoming a huge thing for students and for teachers, the simple design we chose, a single path on one screen, has kind of stood the test of time, in the sense that we haven't really had to change things too much or make a completely separate app, like some of our competitors have done.

Sam Gerdt:

So in 11 years, then, what are the challenges that have most impacted your business?

Mark Johnson:

Well, I mean, the growth part has always been kind of our Achilles heel. Not that we don't grow, it's just been kind of slow and steady, which is not a bad thing. We kind of prefer it that way. We're probably mostly an introverted company, so we're not out there banging down doors. So that's been a challenge overall. We just never have focused on that area of the business as much as we have the product side. But again, the issue is that just building it doesn't always work out. You do have to have some kind of outbound effort, which has been a continual challenge for us.

Sam Gerdt:

When I think about e-learning, online education, LMS, whatever it is that you want to call it, it seems to me like your platform is built for a very specific set of people with a very specific need.

Sam Gerdt:

It's not necessarily something that somebody goes out and impulse buys, and so marketing it is always going to be a challenge. I guess I'm a little curious too: are there barriers to people adopting e-learning platforms? Because I know you guys are looking at businesses and saying you can use this for training, and you're looking at educators and saying you can use this for your students. Are there specific barriers that you've seen, and have those barriers shifted over the course of time?

Mark Johnson:

Yeah, the barriers are mostly just the amount of prep it takes to create a course or a path. It's not the kind of thing where you can just sign up and instantly pay us, usually, because there's a lot of work between that trial period and the point where you'd actually want to launch. A lot of the online learning space is on what you might call the content marketing side, where the sales pitch is basically: if you create your own online course, you can make a business working from home. That kind of stuff usually doesn't work out so well because, again, those people have the same problem of how to get their course in front of people. Then they buy courses on how to get their courses in front of people, and it creates this whole recursive loop. On that marketing side there's a little less friction, in that you can sell your product more easily, but then your customers have that same issue.

Mark Johnson:

So we don't tend to focus solely on that. Like you said, we are a more broad, horizontal platform, which makes it more challenging. In the academic space, the challenge is always going to be the IT department and whatever kind of arbitrary requirements you have to meet, and that's another game that, in order to play successfully, requires a sales team and account managers, which is also a route we don't really want to go. So we've stayed in this middle space where we're not really a content marketing platform and we're not really an academic LMS. We're trying to stay in the middle, which does make it a bit more challenging to find the right people.

Sam Gerdt:

Yeah, there are two challenges you mentioned, and it seems like you guys are working on solutions to them. You mentioned the time it takes to build a course, and I've seen a demo of some new stuff you're working on that uses AI, an LLM, to help assist in that process of building courses.

Sam Gerdt:

And then the second thing is that there seems to be a greater demand recently for shorter courses and online educational content, especially as companies are trying to upskill and as people recognize that the information they learned in school isn't necessarily going to stay relevant throughout their careers. So are you hopeful that those two shifts are going to be beneficial for a platform like Pathwright?

Mark Johnson:

Yeah, we think so, and we're working on a new version right now that kind of doubles down on that direction. Not necessarily only for shorter form, but making shorter form a more natural part of the product, so it doesn't feel heavy to get started. The AI you mentioned is still something that's in our labs. We're experimenting with it, trying to find the right UX patterns; I think a lot of people are still trying to figure that out. But we have been thinking about it as almost a templating system on steroids. You could have a teacher, or whoever is creating the path, outline their objectives, the AI could ask some clarifying questions until it has a good idea of what kind of content they're trying to draft, and then it could generate a nice template for you to get started. We think that will help with that initial blank-page problem. But yeah, it definitely is showing some potential.
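
For readers curious what that "ask clarifying questions, then generate a template" flow could look like in code, here is a minimal sketch assuming an OpenAI-style chat completion API. The model name, prompts, and READY marker are illustrative assumptions, not Pathwright's actual implementation.

```python
# A minimal sketch of the "clarify, then template" flow described above.
# Assumes the OpenAI Python client; the model name, prompts, and the READY
# marker are illustrative choices, not Pathwright's actual implementation.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You help a teacher draft a course outline. "
    "Ask one clarifying question at a time until you understand the audience, "
    "objectives, and length. When you have enough detail, reply with the line "
    "'READY' followed by a step-by-step path template."
)

def draft_path_template(initial_objectives: str) -> str:
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": initial_objectives},
    ]
    while True:
        reply = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages
        ).choices[0].message.content
        if reply.strip().startswith("READY"):
            # The model has enough detail; strip the marker and return the draft.
            return reply.split("READY", 1)[1].strip()
        # Otherwise the model asked a clarifying question; answer it and loop.
        messages.append({"role": "assistant", "content": reply})
        answer = input(f"{reply}\n> ")
        messages.append({"role": "user", "content": answer})

if __name__ == "__main__":
    print(draft_path_template("A 4-week intro course on soil health for home gardeners"))
```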

Sam Gerdt:

Yeah, and I imagine there's a temptation to go too far with it, to say, oh, AI can create my course, when maybe that's not the wisest path.

Mark Johnson:

Yeah, I don't think that would be beneficial to most people if you're just being a content middleman with the AI. But where we're trying to frame it is more on the "this will give you a good draft to start with," or maybe give you some ideas for the kinds of topics you might want to cover. So it's more of a brainstorming tool or a template, but tailored to your specific scenario.

Sam Gerdt:

Yeah, do you see Pathwright as something that can augment higher education or even secondary education, or do you see it as a potential replacement? What I mean is, do you feel that these established means of educating people will stand the test of time, or will new platforms like Pathwright come in and disrupt to the point where those older institutions are forced to either change or go away?

Mark Johnson:

Yeah, I think most likely some of the institutions will go away, but some will change. Hopefully the ones that have been able to keep up with things, which unfortunately is not all of them. But I do think in the long run we're going to see a pretty seismic shift in the way the next generation, even the current generation who's active in this learning space, thinks about what learning looks like. The shape of learning is changing quite quickly. I think AI, like we mentioned before, is going to be a big part of that. I'm sure you've probably heard of the two sigma problem.

Mark Johnson:

Basically, the idea, just summarizing it, was that if every student had a personal tutor, then their results would be in the 80th percentile or higher, just by having the attention that they need.

Mark Johnson:

The problem is that it doesn't scale, because you can't give a human tutor to every student in the classroom. And so that's always been the issue: we know this would work if people could get personal tutoring and mentoring along with what they're learning, but we can't provide the manpower for it. So now we're in a phase, though, where AI looks like it could at least do a pretty adequate job at that tutoring part. In some ways it's even better than a human tutor. Not in every way, but in some ways it is, because it doesn't get tired, it's always available, it responds immediately, and it doesn't have its own personal emotions around what you're asking it to do. So that part, I think, will be a huge shift once people start adapting to the idea that this is a thing now: I have my own personal learning assistant, and it's always available.

Sam Gerdt:

Is that something that you foresee all education platforms and institutions wanting to implement sooner rather than later?

Mark Johnson:

I would guess most institutions will be a bit wary of it at first. They already are. A lot of schools are banning ChatGPT, for instance, though there are a few that are trying to incorporate it too. But I think with any kind of new-wave technology, and we're definitely in one of those spaces, there's gonna be a fear reaction to try to hold it back. Then there will be people that embrace it too much. But eventually it'll even itself out, and yes, I do think that at some point it'll just be assumed.

Mark Johnson:

One analogy I've found helpful, and I think a likely outcome for AI, goes back to when calculators became a problem for math classes. I think this was in the early 80s. A lot of teachers were freaking out about it, saying we can't have calculators in our classroom, because what's the point of students working through the problems if they can just type them into a calculator, this is the end of math as we know it, that kind of stuff. The reaction was difficult for the math teachers and students who wanted to use those calculators. But now we all know it's part of the classroom. We all use calculators. I think LLMs in particular will fill a similar space. They'll be kind of the calculator for language that we use in school, and it'll just be assumed. Eventually it'll be integrated into all your stuff already.

Sam Gerdt:

Yeah, I imagine you do have to have certain rules around it, but they don't seem like complicated rules to guide students in the usage of this technology. To me it seems like it could be very straightforward and would absolutely be beneficial. I've talked with a couple of other people already about this use of things like ChatGPT for tutoring, about instructing students in the classroom on how best to augment their learning with an LLM versus banning it altogether, especially if you consider that in higher education you're gonna jump out into the workforce and the technology is gonna be right there waiting for you. When you talk about preparing for the future of Pathwright in your labs, when you look at new technologies like GPT or any other LLM and you say, okay, how do we incorporate that into what we're doing, what's the thought process there? What are the most important things, and what are the things that maybe can be distractions?

Mark Johnson:

Yeah, I mean, the lab itself, speaking personally, can be a distraction in the sense that it sometimes is more exciting than the kind of stuff you're doing day to day. But I personally think you just have to try a bunch of stuff before you find the right solution, and so we use our lab. We have an official lab project status at work. We have this kind of three-part model we work in: labs are on the very far right, the middle is where our actual project work is, and on the left is where we maintain and support things that we've already built. So we divide it that way, and I love working in the lab, but I need to sometimes be focusing on that middle space too.

Mark Johnson:

But as far as how we evaluate it, we usually make prototypes. Like you would in a product lab, we make little one-off demos and tests. Sometimes we show them to the team or run them by a few customers. With the path AI, for instance, we actually put it out there on Twitter and had a newsletter signup to see if people were interested. So we'll do that kind of lightweight prototyping, and eventually those things make their way into the real project, and by the time they get there, after having been in the lab a bit, they usually look quite different from what we started out with. But having that separate, intentional space where we do that kind of R&D has been really helpful, because you can silo it off: okay, this is the lab stuff, this is the project stuff we're moving on with. Having those segments makes it easier to manage and even to think about.

Sam Gerdt:

Yeah, there's a pretty popular video out there where Elon Musk is talking about engineering problems, and he makes the comment that one of the biggest traps for engineers, the biggest waste of time, is trying to optimize a process that should be eliminated. How do you approach that when you're developing a product, and over time, as you're maintaining and perfecting and adapting, how do you measure what should and shouldn't belong?

Mark Johnson:

Well, one way to do it is to make sure you have a balanced team, even on just the pure engineering and development side, especially when you're trying to build a platform. If you're trying to build something that has to last 10 years, you have to have some people who are more on the edge of the technology landscape, watching what's coming down the pike, so that you're making decisions with good longevity. But then you also need people who are going to focus on stability and maintenance and quality assurance, and those are really different personalities. You could say they're different skills, but I think they're actually more aligned with personality than with skill. Personality will tend to shape the skill. There are some people that really like a checklist.

Mark Johnson:

They really like the "let's check everything off and make sure we're ready to deploy" mode. Personally, I hate that. That's not how I like to work. I like to work in a wide open space where there are lots of possibilities and I can kind of spiral around and find stuff. Of course, you want people to be able to do both things, but if you can balance your team across that spectrum, people working in the lab, people making stuff happen and getting things done, and people maintaining and doing support, that can really help balance out the tendency for things to either decay and become stagnant or get so pie in the sky that you never actually do anything practical.

Sam Gerdt:

As you're developing, obviously you're interacting with the public and getting feedback. What are clients demanding of you in terms of your platform and future roadmap? Is AI something that's in high demand from the average person, or is this something that's just hype?

Mark Johnson:

You know, from our customers' perspective, what I've seen anyway, people are very curious about it, but they don't really have an opinion like "we need AI now" necessarily. It's more like, I don't know what's gonna happen with this. Is this AI gonna replace me, or is it gonna make me more powerful? That's the general question I think everybody's asking, so I don't know if I would say that any of them are demanding it. Our customers' needs are generally more on the practical side, like I need more versatile ways to sell my course, or I need this one feature for peer review, or whatever.

Sam Gerdt:

Well, I was gonna ask you: is there a time you can think of where a feature was developed because of feedback from clients, like overwhelming requests for it? And how do you respond when something like that happens? Do you prioritize it, or do you put it on a roadmap to appease people and then develop it in a normal timeframe?

Mark Johnson:

Yeah, that's a great question. We had a fairly unique way of dealing with that, especially early on in the company. We didn't raise any seed money or venture capital, so we were completely bootstrapped. But that means you have to come up with some model to keep the bills paid while you're developing out a platform that has no customers yet, and our way of doing that was what we called a partner model.

Mark Johnson:

So we would work with specific customers who had online learning goals, and we would give them a proposal. Basically a pitch that said, hey, we're developing this platform; if you will fund X percentage of this feature, then we'll give it to you for free, or we'll give you usage, and we'll also work with you to make sure it's meeting exactly what you need. So it's a level of service that you wouldn't usually get from a software-as-a-service type of product. But we also got the benefit of raising a good amount of funding using that method, probably more than you would get in a regular seed round, without giving away any equity at all. And we got real customers. We're not guessing what they want, because they're actually paying us to build it. So we found that model works quite well. There are some gotchas with it, but overall we've been pretty happy with it.

Sam Gerdt:

Yeah, so give me an example of one of those features that was developed just by popular demand.

Mark Johnson:

I don't know if it was so much popular demand as opportunistic demand. There were several features we built around competency-based education models, where we had several different partners wanting to fund the same thing at the same time, three or four trying to fund the same basic feature set. They were trying to do learning models that center around mentorship and the learner, not so much a program as an adaptive path kind of structure, where learners have different mentors giving them feedback on different areas. So we built some features specifically for that use case, because it aligned with where we wanted to head as a platform anyway, but these partners were also willing to fund it. That's one example, but we've done quite a few different features that way.

Sam Gerdt:

In terms of your user base, do you feel like it's weighted more toward the large scale, projects and courses that are meant for a lot of people, or are you currently seeing more one-on-one or one-on-two mentorship-type programs?

Mark Johnson:

I should probably know that statistic, what the average cohort size is for a course. If I had to guess, just based on what I've seen, it would usually be in the 30 to 40 range, so maybe medium size. But there are cases where we have single students. There are some homeschoolers that use the platform who just have one or two kids on there.

Mark Johnson:

And they're adding stuff to their path. But then we also have some courses that are outliers, with 10,000 students in them, so it's kind of all over the map.

Sam Gerdt:

Do you use the product in your own marketing and attempts to grow the company? Do you guys make your own courses?

Mark Johnson:

Yeah, yeah, and actually that's our kind of double-down strategy for marketing.

Mark Johnson:

As I said before, we've always had a little bit of a harder time with outbound marketing, but one thing we can do really well is make paths.

Mark Johnson:

We make a tool to make paths, and we can make paths for our customers, and we do it all the time. But just a few weeks ago we started a new 10-week cycle where we're trying to make more outbound paths, you might say, public paths that will help other people know how to use certain technologies. We're working on one right now on ChatGPT for learning: how do you use it to actually learn, not just teach, if you're trying to research something? We've got one coming up on using Butter, which is like a better alternative to Zoom for live sessions, and one on using Zapier to automate workflows. So that'll be one strategy, making public paths that people can use. The benefit is that we get to help people learn whatever it is we're trying to help them with, and they also get to use the platform while they're learning it.

Sam Gerdt:

One of the things that I've come to appreciate, and I think it's still early to tell where we're going with this, is that there's a trend in business to use education as a marketing tool, a trend to offer value to the people that you're trying to reach by educating them, no strings attached: we're just gonna give you a skill. I work for an agency, and we've tried that here and had success with it. You guys have tried that too. Is that a model that you would consider in the future in terms of outbound marketing for your own platform? Would you consider approaching businesses and saying, hey, let's almost turn your platform into a marketing platform?

Mark Johnson:

I'm not sure if you're familiar with Basecamp, or 37signals, but Jason Fried and those guys had a phrase that we thought about a lot early on: out-teach, don't outsell. I really do prefer that approach. I think it's more natural and humanistic than trying to get people to click an ad, for instance. Just genuinely share what you know, and hopefully that leads to people learning and using your product in a way that makes them curious and want to build their own stuff on it. But yeah, we definitely are leaning in that direction. I wouldn't say we're super proficient at it yet, but that's where we want to go, that out-teach, not outsell approach.

Sam Gerdt:

I want to switch gears real quick because I've been following you for a while, and you've been fairly vocal on social media about ChatGPT and LLMs, particularly vocal against some of the doom and gloom and some of the more fantastic claims. I want to first ask: how did you become aware of what was happening, and how was your thinking shaped with regard to the current hype?

Mark Johnson:

I've been using GPT since it was 3, so not 3.5. When it first came out I was fortunate enough to get off the waitlist pretty quickly and start playing with it, and that was kind of the Wild West era because hardly anyone had access yet. Everybody was trying to figure it out, and you could find stuff on Reddit or whatever. But I guess from the beginning I always just saw it as a tool, and a very cool one, extremely powerful in what it could do and the potential it unlocks. The fact that we basically taught silicon to speak, that we taught our computers to talk English to us and we can talk back, I mean, it's just insane. That is a huge, huge leap.

Mark Johnson:

And even though GPT-3 wasn't nearly as good as GPT-4, it was still very impressive, so I guess I've always been more on the optimistic, utility side of it. But also, it's not really a mystery how it works. It's just a statistical model that's trying to predict the next token, and that sounds really trivial, yet it's profound that it's as powerful as it is. That's why I don't completely understand where a lot of the doomerism comes from, the kind that assumes some level of consciousness and agency in the AI that just isn't there. I also think it adds a lot of noise and clickbaity distraction to what could be healthier conversations around AI and safety, because there are real problems with AI. They're just not that it's going to go rogue, become Skynet, and kill us all.
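
For anyone curious what "predict the next token" means in practice, here is a toy, self-contained sketch using a bigram word model rather than a neural network. The corpus and names are purely illustrative; real LLMs work over subword tokens with learned weights, but the sampling loop has the same basic shape.

```python
# A toy illustration of "predict the next token": count which word tends to
# follow which in a tiny corpus, then sample continuations from those counts.
# Everything here is illustrative, not any specific model.
import random
from collections import Counter, defaultdict

corpus = "the student asks a question and the tutor answers the question".split()

# Build bigram counts: token -> Counter of the tokens that followed it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    tokens = [start]
    for _ in range(length):
        candidates = following.get(tokens[-1])
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        # Sample the next token in proportion to how often it followed the last one.
        tokens.append(random.choices(words, weights=counts, k=1)[0])
    return " ".join(tokens)

print(generate("the"))
```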

Sam Gerdt:

Yeah, well, it's interesting. On the one hand, I think you have people who do know better and who are taking advantage. There's money to be made, people are paying attention, so they're using the attention. There's certainly some of that. There's also, like you said, that fundamental misunderstanding. I think the problem is that it's language. Think about the Turing test. Obviously, if you're looking at a machine that's spitting out code or talking to you in a computer voice, it's not going to pass the Turing test. But we've become accustomed to chatbots and automated emails, and all of a sudden you've got a computer that talks to you in much the same way as any other customer service agent you've dealt with.

Sam Gerdt:

I think it's too easy to game language. The idea that it's talking to you and faking this humanness is, I think, what has tripped up a lot of people.

Mark Johnson:

Yeah, that's a great point. I think we're all very susceptible to language, especially very fluent language. You can see this with politicians, where there's a shell of a person, but being linguistically fluent, just being able to deliver speeches in the right tone or whatever, can go a long way. And these models are only going to become more fluent.

Mark Johnson:

So it's not something that's going to go away. But I agree 100%. I think we're very easily fooled by fluency, and the fact that there's nothing but a frontal lobe there with the AI doesn't really occur to people, because it's so fluent that it appears to have some sort of conscious layer. And people love to anthropomorphize anyway. Kids anthropomorphize everything; if you have kids, you know that. I think adults are the same way.

Mark Johnson:

We tend to anthropomorphize things way more than we think we do, and when you have something that's fluent, it's almost impossible not to. So, yeah, I would guess a lot of the more extreme views of what AI is capable of come from that type of bias: getting tricked by the language, basically.

Sam Gerdt:

Fluency is going to become an interesting thing coming up, I think, because the more fluent it gets, the more nuanced the input and output of the math engine behind the scenes. You're gonna have these situations where the hacks people use to jailbreak these things and dump content they're not supposed to dump become less and less perceptible. They're not gonna look like a bunch of hashes and gobbledygook; they're gonna look like natural language. And the flip side of that is it's just going to give a much greater appearance of intelligence without anything behind it. Like you said, it's just a frontal lobe, which is probably gonna exacerbate the problem.

Mark Johnson:

Oh, absolutely, yeah. It's almost like we need a new Turing test. It reminds me of Blade Runner and the Voight-Kampff test, the interview that Harrison Ford gives to the replicants to see if they're human or not.

Mark Johnson:

We kind of need a version of that. In the future, I think we'll need something to be able to detect whether we're talking to a real human agent or not. I mean, you can kind of tell if you interact with AI enough. Right now I can spot GPT-4 writing if you're not clever about the prompting or whatever, but eventually that's just gonna be much more difficult to spot, and there won't be a reliable way of knowing if you're talking to a real human being or some sort of AI agent.

Sam Gerdt:

Well, the thing that gets me the most excited about a natural language processor like GPT is that all of a sudden we have a much better UI than the keyboard, mouse, and monitor setup that we're so accustomed to. It makes me really happy when I see people recognizing the LLM as being a UI and not necessarily an AI.

Mark Johnson:

Yeah, that's a great way to think about it, and I think most of the more profound leaps that happen because of AI will come from thinking about it that way. What affordances does this open up that weren't there before? The fact that we can speak to our computers and they can speak back to us: what does that enable that was not previously possible? We know some of those answers, but we're still in the late-90s era of it.

Mark Johnson:

If you compare it to the development of the web, we just got AOL. ChatGPT is basically AOL for AI, a very simple, basic version of what these AIs could be, and what they could become we can't quite imagine yet because we haven't had enough time. But I agree, thinking of them as UIs, asking what they enable that was not previously possible, is the direction I think people need to go in. I would guess there are some major leaps that are going to be made just through that category alone, not necessarily through better AIs, but just through thinking about what this enables us to do that we couldn't do before.

Sam Gerdt:

Yeah, have you used GPT or any others to code?

Mark Johnson:

Yeah, I use GitHub Copilot, which is based on, I'm not sure which model, GPT-3.5 maybe, fine-tuned for coding, and it does a great job. It's autocomplete on steroids. It's kind of dumbfounding sometimes when I'm writing code and all of a sudden it's suggesting what I would have written anyway, which is a cool experience. I don't even really think about it much now, I'm so used to it, but yeah, it's pretty amazing.

Sam Gerdt:

So a long time ago, we went from everything needing to be coded by hand to this prevalence of libraries: think of all the libraries out there that you can just load, and all of a sudden you have these massive code bases at your disposal. And now you have tools like Copilot. In terms of leveling up efficiency, what has the current state of generative AI done for coding? What have we seen in terms of percentage, do you think?

Mark Johnson:

I mean, I think GitHub did a study on that, or a survey. I can't remember what the percentages were, but I think it was something like 40% of lines of code written with Copilot in the code bases they analyzed. I don't remember the exact numbers, I could be misquoting it, but it was a lot higher than you might expect. And I think you can also look at what's happening with Stack Overflow.

Mark Johnson:

It used to be the de facto source where programmers would go to look for answers. It's not that it's not used anymore, it's just that a lot of programmers are now asking ChatGPT instead of going to Stack Overflow, so they're not getting nearly as much traffic, and I find myself doing that all the time. The benefit, again the affordance that a chat-based interface opens up, is that I can ask my question, it gives an answer, and then I can ask again and again. I can refine, I can tell it to ask me follow-up questions if I'm being unclear about something, or I can ask it the definition of some word it used, in real time. That's not really something you could do before. It's a new affordance, and it's such a better experience that sites like Stack Overflow are gonna become less and less relevant.

Sam Gerdt:

So where's the limit, then? Just generally, where's the limit of the LLM as an AI? And also, does an LLM ever truly learn to code?

Mark Johnson:

Yeah, those are both good questions. As far as where the ceiling is, that's hard to predict.

Mark Johnson:

I don't know if you follow the image AI communities at all, like Midjourney and Stable Diffusion, but when those first came out, DALL-E, from OpenAI, was the first one that was coherent enough that you thought, oh, these look like legit images. When it first came out, people were blown away: this is amazing. Just a few months later you got Midjourney, which was significantly better, and now you've got Midjourney 5 or whatever they're on, and Stable Diffusion XL, which make that DALL-E stuff look like trash when you look back on it.

Mark Johnson:

Why were we impressed by that? It reminds me a lot of early 3D games, like Mario 64. It was amazing, but now you look at it and it's super blocky. Still a really well-designed game, but the tech itself hasn't aged well. So you have these kinds of blinders when it comes to very new technology, where you don't know how to evaluate it because there's nothing to compare it with, and I still feel like we're in that space. As far as where the ceiling is, I don't know; I guess we'll know it when we see it. It doesn't seem like we've hit it so far, anyway. Based on capabilities, the test scores GPT-4 can get now on the SAT and even the bar exam just keep getting better, and GPT-5 will probably ace them.

Sam Gerdt:

Who knows. There does seem to be a diminishing return, though, as they train larger and larger models, diminishing returns in performance.

Mark Johnson:

Yeah, you can only get so large once you've crawled most of the corpus that's available. I think there are techniques people are working on that are more interesting, like some of the papers coming out about training and fine-tuning even on very small data sets that show very good results. Those have high potential. There's also the multimodal stuff, which I think will be a game changer.

Sam Gerdt:

Yeah, and now we've left the LLM and we're talking more broadly about introducing other models.

Sam Gerdt:

I mean, even when you talk about images, I think we've left the LLM at that point, because those are more of a diffusion model. What intrigues me about those is you have far fewer parameters, maybe by a factor of a hundred, a couple billion versus a couple hundred billion parameters, to produce these incredible images. So I was just curious about your thoughts on where the ceiling is, because I see people talking about turning on another thousand GPUs and training some massive model, and my thinking is, what kind of a performance increase is that gonna get you? Is that even gonna unlock any new capability, or is it just gonna get you a higher score on some comprehension test or standardized test?

Mark Johnson:

Yeah, that's a great question. I mean, the first leap in LLMs, which were kind of boring before GPT-3, came from having a much larger training data set, trained in a different style with the gradient descent and backpropagation stuff, but on a much larger corpus than had ever been done. The amount of fluency gained from doing that was substantial. There definitely was a huge leap from GPT-1 to GPT-2 to 3, with 3 being the biggest by far, a substantial leap in capability. So it could be that throwing more training data at it makes marginal improvements, but again, you're going to run out of data. The obvious thing to think about is, well, why don't we use AI to generate a bunch of quality data, ingest that, and feed it back into the machine? They've done some tests on that, and it just leads to gibberish, which is very interesting.

Mark Johnson:

I think the same thing is true with image AIs. You can even see this with Midjourney. It's very impressive, but they fine-tune their models based on people in their Discord upvoting images or downloading them or whatever, and eventually it kind of normalizes into this thing that is a very Midjourney image. You recognize it right out of the gate if you're familiar with that style. I think the same would be true when you're training on an LLM's own data: eventually you're just gonna get very recognizable AI text.

Sam Gerdt:

Well, and you even said you can recognize GPT-4 when you see it. Anybody who uses these tools with any amount of regularity tends to notice it. It's almost like it has its own subtle voice.

Mark Johnson:

It does.

Mark Johnson:

Yep, you develop kind of an intuition for it: okay, this is AI. Whether it's an image or text, you can kind of see it.

Mark Johnson:

Eventually, I would guess, the more people get exposed to AI, the more a general intuition about it will develop, and maybe people will be able to see it where they couldn't before. Maybe it's similar to how CGI developed in movies. When it first started becoming a thing, it was amazing, like Star Wars Episode I blowing everybody out of the water with massive budgets. But if you look at it now, it kind of detracts from the experience, because it's not really that impressive to us and it's very easily recognizable. Even in modern Marvel movies, or whatever has the huge budgets and top-of-the-line CGI, you can still recognize it. You kind of develop an eye for what's real and what's not. I think we'll probably do the same thing with AI.

Sam Gerdt:

Yeah, I have this unfounded theory. As I learn more about this, as I work with the tools and with people who talk about the tools, I have this developing theory that the ceiling with this current round of AI, and I can't see too far into the future, obviously, has very little to do with the hardware or the software and more to do with the people.

Mark Johnson:

Yeah, I would agree with that.

Sam Gerdt:

We're just gonna stop wanting it at a certain point. We're gonna say the AI is good up to this point, and now we want people, we want to deal with humans. So you can talk about job replacement, you can talk about agents, but at the end of the day, people are going to want to talk with people, they're going to want to do business with people, they're going to want to make that connection.

Mark Johnson:

Yeah, generally people are not really interested in technology. Some people are, if they're a tech enthusiast like I am, and you probably are, but the average person maybe thinks about technology a few times a week, and then it's probably a negative thing because it's not working. People are much more interested in human beings than anything else in the world, and I think you can see this clearly with AI art: no matter how impressive it gets, I doubt there are many people hanging it on their wall. The connection to art is gonna be the human who painted it. If you bought something, even from someone you have a parasocial relationship with on Instagram, it's still more meaningful to you than something you generated with Midjourney. Even though that Midjourney image may be technically more impressive, the fact that it didn't come from another human being is a significant factor.

Sam Gerdt:

Yeah, we're familiar with that concept of hotel art, the idea of art that is simply designed to fill a space, versus art that we have some kind of connection with. I think AI does a great job with hotel art.

Mark Johnson:

Yeah, and hotel writing, I guess you could say, if you want to stretch the analogy. And there are some artists that will lose out because of that.

Sam Gerdt:

I think the print market, the idea that you've painted something and now you make endless prints of it and sell the prints, is gonna see a decline because of this, because the audience for a print is more akin to the audience trying to fill a space and less akin to an audience appreciating the art for the artist or for some deeper meaning.

Mark Johnson:

Yeah, especially like a digital print. I mean, those are difficult enough to sell already. They're definitely not gonna get any easier with this kind of thing.

Sam Gerdt:

That's a really interesting point, and that's really why I wanted to hear about what your clients are talking to you about. As business people, are we imagining that there's some burden on us to introduce these tools into our products, or is that just us reading the news and fearing? What do our clients actually want? Is there this great demand for AI agents, for AI tools?

Mark Johnson:

Yeah, there is definitely a hype cycle you can take advantage of just by using the word AI in your marketing right now, but that's temporary. That won't always be something you can take advantage of. Long term, I think it's just very practical: use AI in ways that help you. If you don't wanna spend X amount of time reviewing a privacy policy to see if there's anything to be concerned about, let AI do that for you. If I don't wanna draft a legal document myself, let AI do it; there are fine-tuned AIs being made for law now that can draft those kinds of documents.

Mark Johnson:

So I think an optimistic view would be that AI could replace what we might call dehumanizing work.

Mark Johnson:

Work that is drudgery, work that is monotony. We have seen this happen already in industry, with industrial manufacturing and robotics taking some people's jobs. Economically that is difficult, but in the end it is work that humans, in an ideal world, again being idealistic, shouldn't need to spend 40 or 50 hours a week doing, turning a screw on something. The same is true in the digital space, where we've basically got our equivalent: doing the same job over and over and over. It's meaningless, it's not creative, it's detached from the impact that the work has because you're some part of the paper chain far down a bureaucracy. I would love it if AI could replace all of that. Of course there's gonna be economic disruption because of it, but I think we'll be in a better place, from a humanity-wide perspective, if we can automate the tedium of the work and instead put more value and emphasis on the things that humans truly enjoy, not the paper-pushing part of our jobs.

Sam Gerdt:

Right, and if what humans crave is human connection, then removing the drudgery shouldn't be something that we fear, because what it's going to do is increase what we want and, in the process, probably increase our wealth.

Mark Johnson:

Yeah, I would think so. I think that's the optimistic view. Obviously there's a more pessimistic view that a huge amount of what would have been considered white-collar jobs are disappearing, and that has a trickle-down effect on education, which is already having a hard time. A lot of the more popular degrees lead to some of the roles most targeted for replacement, like entry-level law positions, paralegals, stuff like that. Even in accounting, you have to put in your time doing, again, dehumanizing work; people will spend five years doing auditing to be able to rise through the ranks of whatever accounting firm they're working at, and it's a rite of passage. But where does that rite of passage go when that stuff is all completely automated? So it's going to take some time for those industries to adjust, and obviously it's going to be very disruptive to a lot of the job market.

Mark Johnson:

So it's not all positive in the short term, but in the long term I think it could be quite positive.

Sam Gerdt:

Absolutely, and for those to whom it will be negative in the short term, there are still ample opportunities being created. It's not that there's this vacuum that happens in the short term, but it does require you to change, and that can be uncomfortable, I think.

Mark Johnson:

Yeah, definitely.

Sam Gerdt:

There's not going to be no jobs. There's going to be different jobs, there's going to be changing jobs, but I don't think that there's a scenario where there's no jobs.

Mark Johnson:

Yeah, I agree. Just like the early internet, when it first emerged onto the scene, people didn't know what to do with it and venture capitalists were throwing way too much money at it. And there was a school of thought, I remember reading articles way back in magazines, because we were still reading printed magazines, that the internet was going to replace education: you don't need to go to school if you can look anything up online. That didn't happen. It definitely changed those things, we can say that for sure, but it doesn't normally replace them. I think AI will probably be similar, where it definitely will change them, but how it will do that is, again, hard to predict. I think a lot of those jobs will just be reimagined, assuming that we now have AI as an augmentation tool with us.

Sam Gerdt:

Yeah, the idea that we get it right the first time, obviously that's not going to happen. Even with the internet, there was no doubt it was a good thing from the very beginning, but you still had the dot-com bubble, you still had all kinds of discomfort and bad stuff that we had to get through to get to where we are now.

Sam Gerdt:

AI will be no different for sure.

Mark Johnson:

Oh, absolutely, yeah. I think we're still in that kind of late-90s phase. We may not have even hit the bust phase of it yet, but from an economic standpoint we are seeing similar dynamics at play, with lots of cash being thrown into these AI companies and really big moves like Microsoft buying a huge stake in OpenAI for, I can't remember how many billion, and that's not going to slow down anytime soon.

Mark Johnson:

But at some point, probably when people are kind of saturated and AI is not as sexy as it is right now, I think some of the bottom is going to fall out of that market and we might see a bust similar to the dot-com crash. I don't know, but it could be.

Sam Gerdt:

Yeah. So as a product developer, how do you position yourself going into all of this? Obviously, you don't want to put all your eggs in one basket, but what are you doing to ensure that you're taking a tempered, human approach to innovation, so that whenever there is discomfort, disruption, a bust, whatever it looks like, you're there on the other side?

Mark Johnson:

Yeah, I think there are probably two answers. One is more on the product side. In a sense, our strategy is just to use AI where it helps, and then double down on the things that we know already work, based on having thousands of customers. Making those things better isn't necessarily influenced by AI one way or the other, but there may be parts that could be automated. For instance, scheduling due dates is a pain; scheduling in general is a pain. If we can use AI to let people describe their class schedule, say, we meet every week on Monday at 6:30, and then automatically set the dates, that's a painkiller type of feature. But it doesn't really matter whether that's AI or not.
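
As an illustration of that point, here is a minimal sketch of the date-expansion half of such a feature, assuming the schedule (weekday, time, number of sessions) has already been extracted from the user's description by an LLM or any other parser. The function, dates, and parameters are hypothetical, not Pathwright's implementation.

```python
# A minimal sketch: once a class schedule has been extracted from a
# plain-language description (by an LLM or any other parser), expanding it
# into concrete due dates is a simple deterministic step. Illustrative only.
from datetime import date, time, datetime, timedelta

def weekly_due_dates(start: date, weekday: int, at: time, sessions: int) -> list[datetime]:
    """Return `sessions` datetimes on the given weekday (0=Monday) at `at`,
    beginning with the first such weekday on or after `start`."""
    days_ahead = (weekday - start.weekday()) % 7
    first = start + timedelta(days=days_ahead)
    return [datetime.combine(first + timedelta(weeks=i), at) for i in range(sessions)]

# "We meet every week on Monday at 6:30" for a hypothetical 10-week path.
for due in weekly_due_dates(date(2023, 9, 4), weekday=0, at=time(18, 30), sessions=10):
    print(due.strftime("%a %Y-%m-%d %H:%M"))
```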

Mark Johnson:

That's just a capability we have now. It could be just some other algorithm that's not an LLM; it doesn't really matter. It's just a way of making the product more seamless. And then, on the other side, we have been trying to intentionally speak in public about some of the change that's happening. We had a Path Camp recently, maybe a month ago, where we had several topics, one of which was AI. We invited people in, experts and so on, to talk about different things, and we invite all our customers to come and just have kind of an open-floor discussion. Those are always really educational for us, and for the customers, we hope.

Mark Johnson:

But that's one way we do it too: just by talking about it with our partners and our customers, staying attached to the real human need and not just what's happening online.

Sam Gerdt:

So for the product developer, the approach should be: stay connected with the people that you're serving, and don't develop out of a sense of urgency; develop out of a sense of thoughtfulness and rationality.

Mark Johnson:

Yeah, I mean, nothing good ever happens out of fear or anxiety, right? I'll be honest, when I first saw some of the stuff that AI was capable of, I did think, are we done? Could education shift enough under our feet, just because of these developments, that people wouldn't even need a product like Pathwright? But when you come back down to it and you start thinking again about the real human needs that people have, which parts AI fills and which parts it doesn't, and just human connection in general, which is an extremely important part of education, then a lot of those fears go away. So I think you just have to redouble your focus on who your customers are and what they're needing, talk to them, and obviously think about this AI stuff. But if you put the AI first, I feel like you get the cart before the horse and you can lose sight of the forest for the trees.

Sam Gerdt:

Yeah, and maybe it's a product-before-people problem. I think there's something analogous when you look at Kodak and film. There were companies, Kodak I think was one of them, who saw themselves as a film company, and it was about the tech. And then you saw the rise of digital and all of this other stuff, and at no point did someone take a step back and say, no, we're not a film company, we are a company that captures people's memories.

Sam Gerdt:

We're a company that preserves something special. And as soon as you abstract it, even just that one step back, you realize, okay, then it's not about the tech. We can still be us without losing our identity, and advance into this new era.

Mark Johnson:

Absolutely, yeah. And I think that's especially difficult if you've been in business for a long time and you're used to "this is the way it works, we know this works, this is what our customers are paying us for." Trying to find that original reason why you're doing this in the first place can, kind of strangely, get even harder than it was at the beginning, when you could see it very clearly.

Sam Gerdt:

Yeah, that's such a necessary exercise for any business: to have that intrinsic connection with the people that you're serving and the need that you're meeting, and to be able to separate your actual product or services from it and say, these things can all change. What can't change is this identity: who we serve, the need we meet.

Mark Johnson:

Yeah, that kind of core value that you've got at the very bottom. But you're right that it can be easy to forget about, or to have never identified in the first place. So it's a pretty important space to explore.

Sam Gerdt:

Yeah, this has been an incredible talk. I really appreciate your time.

Mark Johnson:

Yeah.

Sam Gerdt:

I think we'll end it there. That's such a good note to end on, just remembering that your product is not your identity. So is there anything else you wanted to mention? Is there anything coming up with Pathwright that you want to talk about?

Mark Johnson:

Oh, sure. We've got a new product in the works, actually. It won't be launching this year, but sometime early next year we're planning on launching a new product that will have some of this AI we've talked about, but I think it'll do what we said there at the end: try to double down on the core value of paths. If you would like to follow us on that, you can look at our website, pathwright.com, and that's "wright" like shipwright or playwright. We'll definitely post announcements there. We also host labs every now and then, sometimes on the topic of AI, and we launch courses. So if you're interested in learning about that kind of stuff, we'll be posting those as well.

Sam Gerdt:

Excellent. Thank you, Mark. I really appreciate the time.

Mark Johnson:

Thank you.

Sam Gerdt:

Thanks for talking with me.

Mark Johnson:

Yeah, it was fun, thanks.