> The parents' case hangs largely on the student handbook's lack of a specific statement about AI, even though that same handbook bans unauthorized use of technology. "They told us our son cheated on a paper, which is not what happened," Jennifer Harris told WCVB last month. "They basically punished him for a rule that doesn't exist."
I'm going out on a limb here, but if this is the viewpoint of the people who raised him, then I'm not surprised he cheated.
If this was my son and the facts were the same, he'd be grounded in addition to whatever consequence the school deems fit.
What's the relevance? Are you going to embark on a "Let's Educate The Users" mission for parenting?
It would be futile. Parents and children are now united in not wanting to be educated.
SAD!
What is unauthorized use of technology? Is the light I need to read not technology? Is using the internet to find more about a topic not technology? Where is the line that makes AI forbidden?
The lack of implicit or explicit authorization. As the school has lights, you may assume they are authorized implicitly.
This is unproductive and unenlightening pedantry.
I think the throwaway actually raises the valid point about the rule being an exceedingly broad catchall. The type primed for selective and weaponized enforcement.
That said, the kids are clearly defenseless in this situation, both for the blatant plagiarism and for simply being factually incorrect in their report.
> The type primed for selective and weaponized enforcement
Theoretically true, but irrelevant because this particular case isn't that.
Yes, it is broad, and probably a bad rule. That said, there is more than enough beyond that simple rule in this case that points toward intentional academic dishonesty. If he were my son, getting off on a technicality wouldn't be exoneration in my house.
The part where students were given explicit guidance on the use of AI resources and told how to cite it appropriately. Besides, even aside from the use of technology it’s still a blatant case of plagiarism as he passed off work he did not write as his own.
Have you actually read the piece? The answers to those questions are in the written policy the student was given. But even without the policy, it should be pretty clear that passing off others' work as your own (be they people or AI) is academic dishonesty.
As the judge said, "the emergence of generative AI may present some nuanced challenges for educators, the issue here is not particularly nuanced"
Is what I wrote here mine or not? I used the autocorrect suggestions almost exclusively, wrote few letters only.
Then, no. This isn’t text you generated. No one cares on Internet forums though.
Who came up with the words? If autocorrect is acting as a typist, transferring your words to screen, you are the author.
What if I first asked ChatGPT what should I say? And what's the difference from just copy pasting it?
The question is who comes up with words. If you re-type textbook, you are plagiarizing. Same happens if you re-type ChatGPT output.
On the other hand, if you read some text first (be it ChatGPT's output, or a textbook) and then rephrase it yourself, then you are the author.
How much do you have to rephrase? Is changing every other word to a synonym enough? That's actually a gray area, and it depends on the teacher. Most teachers would expect you to at least change the sentence structure. But in this case it's completely irrelevant, as we know the students did copy/paste.
I really don't see why you are trying to present ChatGPT as something special with respect to plagiarism. Copying others' work is copying. Paying $10 to someone to do your homework and then submitting their answer as-is is cheating. So is using ChatGPT to do it for free.
There is no difference. They’re not your words.
These are games no one in the real world is interested in playing.
In education, the goal is internalizing in the individual the knowledge required for them to bootstrap into a useful, contributing member of society. Things like composition of written works, organizing one's thoughts into communicable artifacts, doing basic mathematics, familiarity with the local and encompassing polity and it's history, how to navigate and utilize institutions of research (libraries), etc. Any technology employed that prevents or sidesteps that internalization is unauthorized.
It ain't that hard to connect the dots unless you're going out of your way to not connect the dots.
Both in and out of education, "it's" always means "it is" or "it has".
Don't think yourself into a hole there bud
The student was not punished for "using AI", but for plagiarism:
>The incident occurred in December 2023 when RNH was a junior. The school determined that RNH and another student "had cheated on an AP US History project by attempting to pass off, as their own work, material that they had taken from a generative artificial intelligence ('AI') application," Levenson wrote. "Although students were permitted to use AI to brainstorm topics and identify sources, in this instance the students had indiscriminately copied and pasted text from the AI application, including citations to nonexistent books (i.e., AI hallucinations)."
So they were caught due to the citations to nonexistent books. That seems fine, and unconnected to all of the controversial news stories about AI detection, which has a high rate of false positives.
The teacher noticed a few irregularities, such as the time the student spent inside the submitted document. The cheating student only had around 55 minutes logged, whereas everyone else had 7-8 hours. That set off alarm bells for the teacher, who then looked into the paper more closely, noticed the fake citations, and found it was flagged as AI-generated when run through a few AI detectors.
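The logged-time heuristic described here is easy to sketch. To be clear, this is purely illustrative: the function name, sample numbers, and the 25%-of-median threshold are my assumptions, not anything from the case or the school's actual tooling.

```python
# Hypothetical sketch: flag any submission whose logged editing time
# falls far below the class median. Names, data, and the 0.25 threshold
# are invented for illustration.
from statistics import median

def flag_low_effort(minutes_logged, ratio=0.25):
    """Return students whose logged time is under `ratio` of the class median."""
    cutoff = median(minutes_logged.values()) * ratio
    return [name for name, mins in minutes_logged.items() if mins < cutoff]

times = {"student_a": 480, "student_b": 420, "student_c": 55, "student_d": 450}
print(flag_low_effort(times))  # → ['student_c']
```

Of course a low number only raises a flag; as in this case, the actual evidence came from reading the paper and spotting the fabricated citations.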
And for AP US History... a college level course. Yikes.
Good. Insofar as the point of a school paper is to make the student do the thinking and writing, taking it from ChatGPT is plagiarizing from OpenAI.
Probably unpopular opinion here but families, usually wealthy, that use the legal system like this to avoid consequences are parasites. It reveals not only your poor job of raising your children. But also the poor character of the parents.
Glad the courts didn’t grant a similar “affluenza” ruling here. The student plagiarized, short and simple.
What's striking to me is that the parents sued. RNH passed off AI-generated text as their own when they knew how to cite AI generated works and were versed in academic integrity. It wouldn't occur to me to sue the school if this was my kid.
They're not optimizing for the kid's education. They're optimizing for the credentials the kid is able to get.
Filing the lawsuit is an asymmetric bet:
- win, and increase college admissions odds
- lose, and be no worse off than without the suit
> lose, and be no worse off than without the suit
This kid should change his name, given his initials, high school and parents’ names are public record next to a four brain cell cheating attempt.
Do you think college admissions officers follow the news and use what they learn to maintain a naughty list?
Perhaps a business idea?
Unless he has someone who is very sympathetic to his cause, the teacher/counselor recommendation will wreck him.
This guy needs to go to a JuCo that feeds into a decent state school — he’s screwed for competitive schools.
> Do you think college admissions officers follow the news and use what they learn to maintain a naughty list?
College admissions, no. College students and colleagues and employers, being able to use a search engine, absolutely.
If you search the student's name on Google, you probably won't find this lawsuit.
Admissions know his name and the name of the school, which helps find specific students.
It’s easy to miss, but I wouldn’t be surprised if searching “Hingham High School Harris” brings up the relevant info. Further, his parents suing may be a larger issue for a college than his own behavior.
I'm guessing at some point there will be LLMs trawling through news items to put together profiles for people, and as the cost comes down, it won't just be available to three letter agencies and ad platforms, but schools and employers will start to use them like credit scores.
Nope. I just replied above with a similar story from when I was in school. My classmate got expelled for cheating and sued the school. TV segment, articles about him, etc.
Zero effect on his college outcomes. Got into really good schools.
> lose, and be no worse off than without the suit
If I were in college admissions then I'd probably think twice about admitting the candidate with a widely reported history of trying to sue their school on frivolous grounds when things don't go their way.
> - win, and increase college admissions odds
Will it, though? Like if the college happens to know about this incident?
It does strike me that the purpose in attending college is the credential you get; education is a far second.
It strikes me that this is a foolish take to adopt.
I saw lots of students acting a bit like this but I was grateful that I could dedicate myself primarily to my schooling and took as much advantage as I could to learn as much as I could.
The credential gets used as a heuristic for the learning you did, but if you show up and don't have the knowledge, then everything is harder and your labor more fruitless.
I know some people don't care and that there are degenerate workplaces but you'll still be left with having been a lot less useful in your life than you were capable of being.
So what would you do in the parents' shoes?
> What's striking to me is that the parents sued
And the kid was even offered a redo!
On the other hand, the school caved on National Honor Society after the parents filed. So maybe the best move would have been (tactically, not as a parent) to show the school the draft complaint but never file it.
Almost zero downside. I knew a student who plagiarized three times, so he got kicked out. His parents sued. It was even on the TV news because they were asking for hundreds of thousands in compensation. He lost and the school kept him expelled.
I was expecting the bad press coverage to hurt his college chances since there were several articles online about him getting kicked out for cheating and then suing.
Nope! Dude got into a really good school. He even ended up texting asking me for past essays I wrote to turn in as his own to his college classes.
And the kicker was he then transferred to one of the prestigious military academies that supposedly uphold honor and integrity.
So. There is almost zero downside for suing even if it gets you tons of negative publicity.
I don't think we can claim zero downside from one anecdote. There are always outliers that can occur from extenuating circumstances.
- The family potentially has the financial resources or possibly connections to 'make things happen'.
- Perhaps the student is especially charismatic and was able to somehow right the situation. Some people have that con-artist mindset where they're able to cheat/commit fraud through their life with seemingly minimal consequences.
- Perhaps they just got lucky and the administration didn't do their due diligence.
> Perhaps they just got lucky and the administration didn't do their due diligence.
Are universities supposed to google every applicant?
I mean I haven't been in academia for a decade, but back when I was I certainly never browsed a 17-year-old girl's instagram before making an admission decision.
Not every applicant, but for the ones in the accepted pool, it strikes me as odd that there isn't some basic amount of vetting.
Instagram? No (although, wouldn't be surprised)... but doing a gut check with the school admin and looking at public records? Sure.
> His parents sued. ...
> He even ended up texting asking me for past essays I wrote to turn in as his own ...
> he then transferred to one of the prestigious military academies ...
>> There is almost zero downside for suing even if it gets you tons of negative publicity.
Sounds like the caveat here should be, "when your parents/family is connected".
Here are some of the case documents:
https://www.courtlistener.com/docket/69190839/harris-v-adams...
The AI hallucinated citations to non-existent publications.
In this case, the AI should publish the cited hallucinated works on Amazon to make them real.
Not that it would help us, but the AI would have its bases covered.
Then they could train the next generation of models on those works. Nothing to scrape or ingest, since they already have the text on hand!
It would seem that what was put into the report is clearly wrong (in this case from generative AI, but regardless of where it came from, it would still be wrong), so it is still legitimate to mark those parts as wrong. There are other things too which can be called wrong, whether or not the use of this generative AI is permitted (and it probably makes sense to not permit it in the way that it was used in this instance), so there are many reasons why it should be marked wrong.
However, if the punishment is excessively severe, then the punishment would be wrong.
He didn't get detention for hallucinating facts. He got detention for plagiarizing hallucinations without attribution.
How do you get students to engage in creative writing assignments in age of AI?
How do you get them to dive into a subject and actually learn about it?
I'm thirty something. How did my teachers engage me in doing math? How did they engage me in rote-memorizing the multiplication tables when portable calculators were already a thing, being operated by coin-cells or little solar panels?
Part of teaching is getting kids to learn why and how things are done, even if they can be done better/faster/cheaper with new technology or large-scale industrial facilities. It's not easy, but I think it's the most important part of education: getting kids to understand the underlying abstract ideas behind what they're doing, and learning that there's value in that understanding. I don't really want to dichotomize, but otherwise kids will just become non-curious users of magic black boxes (with black boxes being computers, societal systems, buildings, infrastructure, supply chains, etc.).
The same way you did so before LLMs existed - you rely on in-class assignments, or take-home assignments that can't be gamed.
Giving out purely take-home writing assignments with no in-class component (in an age where LLMs exist), is akin to giving out math assignments without a requirement to show your work (in an age where calculators exist).
Many years before LLMs were ever a thing, I recall being required to complete (and turn in) a lot of our research and outlining in class. A plain "go home and write about X topic" was not that common, out of fear of plagiarism.
Invert the assignment: provide a prompt to supply to an essay-writing AI of the student's choice, but make the assignment to critique the veracity and effectiveness of the generated essay.
Discussed before the ruling:
https://news.ycombinator.com/item?id=41861818
This would have been illegal in Italy, as their 1970 Workers' Statute protections against automated management would kill this AI.
One of the hallucinated authors is literally named "Jane Doe". Our society is about to become powerfully stupid.
"Doe" is actually a real surname, with a few thousand of them in the US. I'd guess that there probably have been people actually named "Jane Doe". I wonder if that causes many problems for them?
DoE is the department of energy. The department of education is ED.
I laughed out loud when I saw that McMahon was his pick. A fucking wrestling star for the department of education. This is Idiocracy.
Also I laughed because otherwise the fear takes over.
In legal cases that is how one can choose to remain anonymous.
See, there's stuff even geniuses don't know.
Why do you think the previous poster found that name notable? Just because it's inherently funny sounding or something?
That's not relevant to this. It's a direct quote from the work the students handed in.
The parents seem absolutely unhinged.
Poor kid.
Yet another “affluenza” raised child joining the ranks of society. Probably will become a future C-level exec at an American company.
AI is the new calc
I just used ChatGPT to code an HTML/CSS/JavaScript solution in an hour for coworkers who were having trouble. They were like, "Wow, that was fast, we were trying to figure this out for a few days." I'm skilled / an expert, but that would've taken me many hours versus a few back-and-forths with GPT.
Overall, I feel my HTML/CSS/JavaScript skills now aren't as valuable as they were.
I guess in this instance I cheated too, or is it that my developer peers haven't gotten into using GPT, or that they are more moral? Then again, maybe this is just the new normal...
The rules for working are very very different from being at school.
No you were not cheating, you did what was expected from you. But you knew that.
How so? AI is changing the rules everywhere, no? Today it seems not good, yet tomorrow it's just how things are...
The goals are very different. It was like this also before AI.
The goal in school is to learn things. To learn to write you can't just copy an article from a paper and say it is yours. You have not learned.
At work, the goal is to get things done.
In our field you needed, and still need, to learn new things to stay relevant, yet now the new thing does almost all of it for you.
As well, if one generation is using AI to get things done, why wouldn't a younger generation do the same? "Do as I say and not as I do" has never held up well over time.
But you already learned the web stack--school kids haven't. Your mental model is what prepared you to use LLMs well to solve a problem. So if they're going to do as you did, they need to learn the subject first and then learn how to extend their reach with LLMs. Otherwise, they're just cheating in school.
The kids are going to be in a different world than we are. Just like it was useful for us to learn a foreign language (still being taught in schools, but those days are numbered), for kids these days it is a waste of time (I am sure there are many studies saying being bi/tri/…lingual has benefits beyond communication, but you get my point).
I think while we may think "they need to learn the subject first…", do they really? And if they do, why? E.g. someone teaching their kid "web development" in soon-to-be 2025 is insane given the tools we have now… so while there are for sure things kids should learn, it is not easy to figure out what those things actually are.
Yeesh this is full of red flags…
What is... the new normal of using AI to do your job, or to help you get it done quicker? The comment above shows it could be the new normal...
No. This attitude of being better than your coworkers, of coming in and saving the day, has nothing to do with using AI. It's about "I am better than you" instead of helping people out, or teaching them these things you know.
It’s just a passing internet comment missing all the context, so what do I know.
My comments are meant to be controversial… to get people to think… What is the future with AI, and with using it like this? If I told my coworkers how I achieved it, would they not think less of me today? What about in a few years or more, when it's the norm and mine and everyone's HTML, CSS, and JavaScript skills are less valuable?… This example shows that AI will definitely take people's jobs, including my own if I do not ramp up my skills.
Ramping up your skills will do nothing for you if a machine can be delegated your job instead, given the overhead of a human worker versus just owning a machine's output. Not having to negotiate is extremely valuable to a business owner. Mark my words: until people realize that the whole innovation around AI is to sidestep the labor class, things will continue getting much darker before they brighten.
And the saddest thing is, the fools think it'll work in their favor, and that it won't blow back with massive unintended consequences.