this post was submitted on 10 Jul 2023
4 points (100.0% liked)


Hello everyone, welcome to this week's Discussion thread!

This week, we're focusing on using AI in Education. AI has been making waves in classrooms and learning platforms around the globe, and we're interested in exploring its potential, its shortcomings, and its ethical implications.

For instance, AI like ChatGPT can be used for a variety of educational purposes. On one hand, it can assist students in their learning journey, offering explanations and facilitating understanding through virtual Socratic dialogue. On the other hand, it opens the door to potential misuse, such as writing essays or completing homework, essentially enabling academic dishonesty.

Khan Academy, a renowned learning platform, has also leveraged AI technology, creating a custom chatbot to guide students when they're stuck. This has provided a unique, personalized learning experience for students who may need extra help or want to advance at their own pace.

But this is just the tip of the iceberg. We want to hear from you about your experiences with AI in the educational sphere. Have you found an interesting use case for AI in learning? Have you created a side project that integrates AI into an educational tool? What does the future hold for AI in education, in your view?

Looking forward to your contributions!

all 8 comments
[–] [email protected] 5 points 1 year ago (1 children)

I forget where I saw it now, but I ran across a story in which a teacher had students get ChatGPT to write an essay on whatever subject they were studying, and then write their own essay on where the ChatGPT essay was accurate and where it wasn't. I thought that was pretty genius.

[–] mabcat 2 points 1 year ago (1 children)

The genius move is to get ChatGPT to write the essay and the critique. I don't even have to try this to know the output would be of better quality than a student's own critique. From a teaching perspective, the worst part is that the essay and the critique would both be full of subtle errors, and writing feedback about subtle errors takes hours. Those hours could have been spent guiding students who did the work and actually have subtle misunderstandings.

[–] [email protected] 2 points 1 year ago

I don't think that's necessarily fair, or the point. Usually the point of essays is to get students to think critically about the subject, draw some conclusions, and present evidence to support their points. Having students critique an AI-driven essay removes some of the "middle man" of content generation in essay writing, but it still gets the student to think about the subject, form a perspective, and ideally look into evidence that supports it.

I'd add that I don't think the goal is to write "perfect" critiquing feedback that's free from errors. Errors are also part of the learning process :)

[–] sisyphean 3 points 1 year ago* (last edited 1 year ago)

Ethan Mollick has two recent articles related to this topic:

[–] varsock 2 points 1 year ago* (last edited 1 year ago)

It's been a while since I was in a formal classroom setting, but as an engineer I'd assert that I'm constantly learning, so I'll offer my perspective on AI in education for those continuing their education.

I find myself taking more risks at work and in my personal projects when experimenting with new technologies and languages. AI's shortcomings grow exponentially as the technical complexity of the prompt grows linearly, but for a beginner getting their feet wet in a subject, it lowers the barrier to entry considerably. I'm no longer plagued by "analysis paralysis" after reading blogs and tutorials written by authors with varying levels of understanding of the subject. With a few prompts I can effectively filter down the topics I need to read more about in order to produce something useful. No more fear of "what if I missed something?"

Then there's the aspect of creating a tailored refresher for yourself on the class of "stuff you have to relearn every time you have to use it" (love that comment). Or asking an AI to explain what a piece of content means. For example, if you wrote a really complex Makefile, you can dump a tree of the repository and ask the AI to expand all the variables in the Makefile, and suddenly you can read what every step is doing.
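
To make the Makefile example concrete, here's roughly what I mean. This is just a sketch, not something I'd call a finished tool; I'm assuming the OpenAI Python SDK here, and the model name, paths, and helper names are placeholders I made up:

```python
# Sketch: dump a repo tree plus a Makefile and ask a model to expand the variables.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and paths below are placeholders.
from pathlib import Path
from openai import OpenAI

def repo_tree(root: str) -> str:
    """Indented listing of the repository (skipping .git) to give the model context."""
    lines = []
    for path in sorted(Path(root).rglob("*")):
        rel = path.relative_to(root)
        if ".git" in rel.parts:
            continue
        lines.append("  " * (len(rel.parts) - 1) + path.name)
    return "\n".join(lines)

client = OpenAI()
makefile = Path("Makefile").read_text()

response = client.chat.completions.create(
    model="gpt-4",  # placeholder
    messages=[
        {"role": "system", "content": "You explain build systems to engineers."},
        {"role": "user", "content": (
            "Repository layout:\n" + repo_tree(".")
            + "\n\nMakefile:\n" + makefile
            + "\n\nExpand every variable and explain what each rule does, step by step."
        )},
    ],
)
print(response.choices[0].message.content)
```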

But you definitely hit the nail on the head with the point that "it opens the door to potential misuse." I become dependent on it for tasks that I would otherwise only learn by doing. And to use a data-storage analogy, in some ways I become less efficient and more error-prone, because I no longer access the knowledge cached in memory (my brain) and instead fetch data from disk (taking the time to ask an AI), which can return incorrect data (corruption, bad sectors, etc.) that is difficult to catch.

As an educational tool, I think those who behave as "AI gluttons" and overindulge in AI to the point of excess risk eroding their critical thinking and creativity, while those who don't supplement their learning with AI risk being left behind by those who use it responsibly. In the same way, I think AI will not replace programmers, but programmers who use AI will replace programmers who don't.

[–] mabcat 2 points 1 year ago

"Potential misuse" is a bit of a weasel phrase... student use of AI assistants is rampant, the ways they use them are almost always academic misconduct, so it's actual misuse.

Our institution bans the use of AI assistants during assessments unless a subject's coordinator permits it, because using ChatGPT in a way that's consistent with academic integrity is basically impossible. Fixing this means fixing ChatGPT and its peers, not reimagining academic integrity. Attribution of ideas, reliability of sources, and individual mastery of concepts matter more than ever in the face of LLMs' super-convincing hallucinations.

There are no Luddites where I teach. Our university prepares students for professional careers, and since in my field we use LLMs all day long for professional work, we also have to model this for students and teach them how it's done. I demonstrate good and bad examples from Copilot and ChatGPT, quite frequently co-answer student questions in conversation with ChatGPT, and always acknowledge LLM use in materials preparation.

I also have a side project that provides a chat interface to the subject contents (GPT-4 synthesis over a vector store). It dramatically improves the quality of the AI assistant's answers and makes it much easier to find where in the materials a concept was discussed. Our LMS search sucks even for plain-text content; this thing fixes that and also indexes code, lecture recordings, slides, screenshots, explainer videos... I'm still discovering new abilities that emerge from this setup.
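
For anyone curious, the core of it is only a few dozen lines. Something like the sketch below captures the shape of it (simplified; I'm using the OpenAI Python SDK and numpy as stand-ins for the real stack, and the chunk texts and function names are made up for illustration):

```python
# Sketch of "GPT-4 synthesis over a vector store": embed course-material chunks,
# retrieve the closest ones to a question, and ask the model to answer from them.
# Assumes the OpenAI Python SDK and numpy; all names are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of text chunks (slides, transcripts, code snippets)."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# Index: chunks of subject content, embedded once (a real vector DB would go here).
chunks = [
    "Week 3 lecture: recursion and the call stack ...",
    "Assignment 2 spec: implement a linked list ...",
]
index = embed(chunks)

def answer(question: str) -> str:
    """Retrieve the most relevant chunks, then ask the model to answer from them."""
    q = embed([question])[0]
    scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    top = [chunks[i] for i in np.argsort(scores)[-3:][::-1]]  # 3 best-matching chunks
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided course material; say which chunk you used."},
            {"role": "user",
             "content": "Material:\n" + "\n---\n".join(top) + "\n\nQuestion: " + question},
        ],
    )
    return resp.choices[0].message.content
```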

I think the future is very uncertain. Students who are using ChatGPT to bluff their way through courses have no skills moat and will find their job roles automated away in very short order. But that realisation requires a two-year planning horizon, and the student event horizon is "what's due tomorrow?" I haven't seen much discussion of AI in education that's grounded in educational psychology or in a practical understanding of how students actually behave. AI educational tools will be a frothy, buzzword-filled market segment where a lot of money is made and spent but overall learning outcomes remain unchanged.