Monday, November 20, 2023

Teaching Vegetarian / Vegan Arguments

By good luck and timing, we were able to finish the vegan / vegetarian arguments in class today.


FYI, here are some of the materials used:

1. "Reasonable Humans and Animals," by me, written long ago, posted under a pseudo name.

https://www.nathannobis.com/2016/08/an-argument-for-veganism-from-2005-or.html

2. The abstract of the position paper "Position of the Academy of Nutrition and Dietetics: Vegetarian Diets."

https://higherlogicdownload.s3.amazonaws.com/THEACADEMY/859dd171-3982-43db-8535-56c4fdc42b51/UploadedImages/VN/Documents/Position-of-the-Academy-of-Nutrition-and-Dietetics-Vegetarian-Diets.pdf

3. "Our Friends at the Farm," an old video from Farm Sanctuary and Gene Baur, which I set up as a "college students versus elementary students debate; and:

https://youtu.be/vo2-4RnTk8k

4. Some frequent references to Elwood’s Organic Dog Meat and their website.

https://www.elwooddogmeat.com/

5. Some short video overviews of the main arguments from Singer and Regan, by me.

https://www.tiktok.com/@nathan.nobis/video/7029021393006185734?is_from_webapp=1&sender_device=pc&web_id6927684233759786502

https://www.tiktok.com/@nathan.nobis/video/7030472929603374341?is_from_webapp=1&sender_device=pc&web_id6927684233759786502

Maybe this will be of interest to some out there!

#vegan #vegetarian #teaching #teachingphilosophy #ethics #teachingethics

Update: the materials I use are (again) the ones listed above.

Tuesday, November 14, 2023

On the Moral Status of Non-Human Animals and Vulnerable Human Beings

Many years ago I wrote this paper. It never went anywhere. Maybe I should try again. Thoughts? The link is in the title below:

On the Moral Status of Non-Human Animals and Vulnerable Human Beings


Abstract: What do the reasons given for and against various moral views about many non-human animals imply or suggest for moral views about human beings in all their varieties, especially “vulnerable” human beings? And what do the reasons given for and against various moral views about the variety of human beings imply or suggest for our moral views about many animals? Some people are offended by some of these claims and arguments, but which are genuinely offensive? These and related questions are discussed and answered here.

1. Introduction

2. Moral Epistemologies for Animals

2.1. “Intuition”

2.2. Arguments from Non-Species-Specific Properties

2.2.1. The Argument for Animal Rights

2.2.2. The Argument against Animal Rights

2.3. Arguments from Animals’ Relations to Human Beings

2.3.1. Arguments from Human Beings “In General”

2.3.2. Arguments from All Human Beings

2.3.2.1. Is the Argument Offensive?

2.3.2.2. A Dangerous Argument?

2.3.2.3. An Unsound Argument?

3. Moral Epistemologies for Human Beings

3.1. “Intuition”

3.2. Arguments from Non-Species-Specific Principles

3.3. Rationality-Requiring Theories of Rights

3.4. Theories of Rights Based on Relations to (Rational) Human Beings

3.5. From Animal Rights to Human Rights

3.6. Other Arguments

4. Conclusions: What Should We Think? What Should We Do?

4.1. Animal Advocates

4.2. Human-Rights Advocates

4.3. Foes of Human and Animal Rights

4.4. Advocacy


Sunday, October 29, 2023

Shortcast Club

 I was on the Shortcast.club podcast again, this time with a writing tip ("Assume that your reader is about to die, so you tell them the most important stuff first, in case they don't make it!").

This is a neat podcast (meta-podcast? podcast compiler?) with brief content from many contributors. From the show notes:

"This week on Shortcast Club - Volume 13: Best of Shortcast Club.

Today's theme is G.S.D.: Get Stuff Done and moving forward with life. We'll hear from 7 creators who, in some way, touch on this theme, even tangentially. We'll hear insights from Nathan Nobis, a philosopher; Bill Harper, a marketing guru; Eden Gold, a guide for young adults; Silvia Pohrib, a life coach; Marcelle Iten, an author; Liz Jane, a young professional; and Joshua Terhune, a child therapist.

First up, Nathan Nobis, a Philosophy professor at Morehouse College, in his shortcast, gives a writing tip: imagine your reader might die at any moment. Nathan's Linktree and website.

Next, an episode of Don’t Just Win. Dominate.™ with Bill Harper, a marketing and branding leader. He answers the question: does branding matter for B2B (business-to-business) companies? Bill's website.

In this episode of Eden Gold's shortcast, "Life After High School, Your Ultimate Guide to Thriving in Adulthood!", she says: you are not lazy! Don't tell yourself that. Eden's website.

Next is Silvia Pohrib with what it means to put someone on a pedestal. She talks very fast, so if your podcast player allows, you may want to play this one at a reduced speed. Silvia's Linktree.

Marcelle Iten is an author, editor, and writing coach. In this episode from her shortcast, she shares a dilemma she is facing about a character in her book. Marcelle Iten's Linktree.

Let's hear from Liz Jane in her shortcast, Working Girl Shortcast. Unfortunately, like many people recently, she has been laid off from her job. She shares a very personal, emotional update on how she's feeling. And a warning: there is one swear word in this next segment; jump forward 1 minute if you want to skip it. Liz Jane's Linktree.

Finally, let's hear from Joshua Terhune, a child therapist, in this episode from his shortcast. With the conflict in Israel and Gaza, he has received many questions about how to talk to kids after they've been through a trauma. Joshua Terhune's Linktree.

I hope you enjoyed this taste of some of our favorite recent shortcasts. If you did, please subscribe, and consider writing a review on your podcast app. It helps surface our show to other potential listeners! We really appreciate your support as we grow our show. There are links for each of the creators you heard today in the show notes. We encourage you to find and follow them on Shortcast Club. They are all very binge-worthy!"

Tuesday, September 26, 2023

Ethical relativism

An in-class reading comprehension quiz on cultural relativism:

1. Cultural relativism is the view that different cultures have some different moral views. True or false? Explain your answer.

2. Cultural relativism is an ethical theory: it’s an explanation of what makes wrong actions wrong and what makes not-wrong actions not wrong. What is cultural relativism’s explanation of what makes wrong actions wrong and permissible actions permissible?

3. What argument pattern do the arguments against cultural relativism follow?

4. Are the arguments given for cultural relativism – that is, the arguments for thinking that it’s a true theory – sound or unsound? Why?

Saturday, June 17, 2023

Following All The Facts About Abortion—Scientific, Ethical, And Logical—Wherever They Lead

 New at the American Journal of Bioethics blog!


“Following All The Facts About Abortion—Scientific, Ethical, And Logical—Wherever They Lead,” a response to @CCamosy in @RNS (“Faith, science and the abortion debate”) and @americamag (“it’s the pro-lifers who have science on their side”). #abortion #prochoice #prolife #ethics

And here's a (funny?) video inspired by what's often said about these matters:


@nathan.nobis: "A conversation with a certain type of 'pro-life' or anti-abortion advocate: does that sound familiar to you? If not, it should?!" #abortion #prochoice #prolife #criticalthinking #logic #bioethics #philosophy #arguments #counterexamples

Tuesday, May 23, 2023

What are we going to do about ChatGPT?*

This was posted at the Morehouse College Faculty blog, but that blog seems to be temporarily unavailable, so I have reposted the piece below.

In Google Doc: "What are we going to do about ChatGPT?*"

What are we going to do about ChatGPT?*


Summarizing theme: 

“Anything students can’t do, ChatGPT shouldn’t do for them.”


Nathan Nobis, Philosophy, Morehouse College, nathan.nobis@morehouse.edu

Spring 2023 saw the arrival of “ChatGPT” and other artificial intelligence (AI) tools that are able to produce different forms of writing in response to prompts. While ChatGPT can be used in many contexts and for many purposes, my discussion here concerns its use by students in higher education. 


For a bit of factual background: ChatGPT enables students to enter typical essay-writing assignments (and other types of assignments that involve writing), with their requirements, and the AI will quickly produce organized, informed, sometimes “thoughtful,” and even “stylish” original writing that generally meets the requirements. The AI can also be prompted to (repeatedly) revise that product in light of any requests the user has for revision. And users can manually revise the output (and then seek further feedback from the AI, if desired). A student recently wrote in the Chronicle of Higher Education that many students are using ChatGPT, far more often and in ways that few professors suspect. And there are also reports of students “feeding” ChatGPT their own previously written papers so that it will produce new writings that appear to be “written” in their own unique styles.


So the problem is this: ChatGPT enables students to cheat more effectively on writing assignments. It enables them to manufacture usually at least passable submissions without engaging in any, or many, of the learning activities that writing assignments are intended to support: developing a topic, organizing information, developing a thesis, finding and organizing evidence, presenting arguments, responding to objections, thinking about how best to reach one’s audience, and so on.


Since ChatGPT enables cheating in these ways, and so evasion of the educational process, here’s our question: what are we going to do about ChatGPT?


Preliminaries


Let us begin with some initial observations:


ChatGPT is a tool. Like many tools, it has legitimate uses and illegitimate uses. In educational contexts, illegitimate uses of a tool are uses that do not foster the academic or intellectual goals of a course. So, if a tool does not promote a student’s understanding of complex material, or does not promote their ability to effectively communicate that understanding, or does not promote their skills at analyzing and evaluating information and arguments, etc., then its use is illegitimate. Given this, any use of ChatGPT by students to avoid the work of engaging in challenging learning tasks, which prevents them from realizing the intellectual fruits of their struggles, is an illegitimate use of the software.


ChatGPT is sometimes compared to a calculator, and it’s argued that since calculator “tools” are justifiably used, so is ChatGPT. But this is a poor comparison: calculators are often justifiably used, but usually only after a student has mastered some lower level of mathematics and is moving on to something new and more challenging. The calculator is used on tasks that students already know how to do, and so could do, on their own; they use it to save time so they can focus on a more advanced learning activity. To make it vivid, an elementary school student has not “learned arithmetic” if they have merely learned the numbers and where the +, −, ÷, and × buttons on a calculator are—even if they can provide correct answers to arithmetic problems—because they do not understand the math and cannot do the problems on their own.


In situations where calculators are used to do things that students generally could not do themselves, there is a legitimate educational reason for that. That would not be the case with basic writing, since there are no writing tasks that students are better off outsourcing to AI.


Also, a calculator produces the one correct answer: if the student does the math problem correctly, they get that same exact answer. Writing is not like that: ChatGPT’s outputs never match what, and how, any particular person would write or communicate about an issue.


So, a tool is illegitimately used in educational contexts if it is used to complete a task that the student could not do on their own. So if and when ChatGPT is used to circumvent learning activities that require students to work to develop the skills to demonstrate understanding and successful communication (and much more), ChatGPT’s use is illegitimate: if students use ChatGPT to produce writings that they could not produce on their own—given their current level of understanding and skills—that use is illegitimate. This suggests a memorable guide for when ChatGPT use is illegitimate:


Anything students can’t do, ChatGPT shouldn’t do for them.


One open question is which educational goals can legitimately be met with, or benefit from, the use of ChatGPT: there may be some, and these potentially positive uses should be identified. A concern about these potentially positive uses, however, is that they can often be met in other ways. For example, although ChatGPT could review materials for students, simplifying them in various ways, this is also a task that the instructor could do, or students could do together in groups. ChatGPT can also review students’ self-created writings and projects to suggest improvements: again, this could be done by other students—with benefits for all the students—and/or the instructor, with benefits for the student-teacher relationship. So just because a benefit can be achieved with ChatGPT doesn’t mean that’s the best way to seek that benefit: other routes may be equally or more beneficial. And there’s also a real concern about “slippery slopes”: perhaps students using ChatGPT to “check their work” will lead to them using ChatGPT to effectively do their work, or too much of it.


The above thoughts about when ChatGPT use is illegitimate suggest a related principle: if ChatGPT is used by someone who could create that writing product themselves, then that use may be legitimate.


I, however, cannot identify any substantive writing-related activity that almost any undergraduate student is better off outsourcing to ChatGPT: students are not better off—in terms of improved understanding and skills—if ChatGPT finds a topic for them, creates an essay outline for them, generates a thesis for them, assembles support or evidence for them, finds and responds to objections for them, and so on. Since students lack the expert-level knowledge and understanding required for legitimate ChatGPT use, their using it is typically illegitimate. Students also lack the ability to discern whether ChatGPT’s outputs are of low or high quality and where any errors are: their use could be akin to a student “using” a calculator and then saying of the final answer, “Look, I really have no idea whether this is correct or why: it says what it says.” And, again, students using ChatGPT for almost any potentially legitimate purpose is an easy slippery slope to illegitimate use.


(An aside: I do not find the use of spell- and grammar-checking, including the use of Grammarly.com, problematic, although it may seem to be ruled out by the principles above. Some differences that explain this: sometimes students could spell- and grammar-check their work themselves, so this software is like a calculator doing things the student could do on their own; sometimes students genuinely cannot [yet?] do these things, and the software can help them learn [whereas I don’t think ChatGPT is usually going to help students become better writers]; and I don’t think that finding spelling and minor grammatical errors is as “constitutive” of thinking and reasoning as the processes involved in, say, essay and presentation writing are. Another issue is that, for many students, if we required a high level of grammatical and spelling proficiency before moving on to higher-level learning tasks, we might never get to those tasks, or we’d be waiting too long.)


Many students do not cheat at all now, using any means, and many of these students would not cheat using ChatGPT. Unfortunately, however, it appears that a significant number of students do, will, or might: it’s hard to resist the temptation to cut corners, especially under the pressure of a full load of classes and everything else in our busy lives, and that’s true for nearly all human beings. However, any interventions to prevent and reduce the illegitimate use of ChatGPT should not negatively impact students who don’t and wouldn’t cheat: it’s unfair for their learning and skill development to suffer because of efforts to reduce cheating by other students.


Given these preliminaries (preliminary preliminaries, since there surely are more background issues to be engaged!), again, what are we going to do about ChatGPT?


We can begin with our course goals and assignments: which of them might students use ChatGPT to circumvent? How can “ChatGPT-proof” goals, assignments, and assessments be created? Here are some suggestions, although none is perfect and all can likely be defeated by students determined to cheat:


in-class exams, done by hand. In many cases these need not be (entirely) essay or even paragraph-answer exams, so they can be quickly gradable. For many fields, there are ways to create challenging multiple-choice questions that involve solving problems and applying concepts to new cases to demonstrate higher-level learning goals: multiple-choice questions need not be simplistic, and so the best students need not be worse off for being tested in these ways;

oral exams: discussing an issue can reveal how much someone has learned about that issue. Oral exams, however, can be very time-consuming and are challenging in other ways as well; for one, they might be harder to grade in an objective and fair manner;

speaking-related assignments—including ones recorded to video and posted online—where students cannot merely read, verbatim, something that could have been produced by AI: even if they are “talking through” something that AI contributed to, they can talk through it only if they have adequate understanding of their own;

some additional assignment and learning activity ideas are found in the second half of this article, “Policing Is Not Pedagogy: On the Supposed Threat of ChatGPT.”


It is acknowledged that implementing these strategies in larger classes, where there is less personal attention, is surely harder. 


It must also be recognized that some of these suggestions may be challenging for some students with some disabilities. And some students would not perform as well on these types of assignments as on other, more “traditional” forms of assessment. However, that’s true of all kinds of assignments—some students do better on some kinds of assignments than others—so a mere change in typical assignment format need not be unfair: the best response, as it’s always been, is to provide some variety in the types of assignments.


Concerning the specific teaching of writing, and the processes involved in effective writing, here are some suggestions, although none is perfect and all can likely be defeated by students determined to cheat:


in-class writing activities where AI access is not allowed: either writing by hand or using computers where AI use is (somehow!) prevented or monitored;

slower, more scaffolded writing assignments where students’ steps in the writing process are reviewed along the way: this may result in less overall writing being done, but it should result in better writing and a deeper understanding of the writing process. Breaking things into steps should also discourage the procrastination and eventual panic that motivate some students to cheat on writing assignments;

oral exams and presentations based on these slowly-developed writings;

requiring an official citation method, which some instructors report has been helpful in reducing cheating.


Here are two probably bad ideas for responding to ChatGPT:


one common suggestion involves the claim that since ChatGPT exists and will be used, students should learn how to use it better, and so some assignments could involve them analyzing a piece of ChatGPT writing with an aim to improving it. This is unwise. First, there already exist plenty of not-great writings on all sorts of topics that students can review with an aim to suggesting improvements: indeed, “peer review” by students of student work can serve this function, and there are published writings that are not great either. Second, to successfully review a piece of writing by ChatGPT, students need to have the subject-area understanding and communication skills that using ChatGPT undermines. Finally, if we want models of excellent work, ChatGPT’s writings aren’t what we want to examine. So there’s nothing to be gained by engaging ChatGPT-produced writings that can’t be better gained in another manner: the suggestion that ChatGPT be used for this purpose appears to appeal only to the novelty of doing something with a new tool;

it’s suggested that AI-detection software will get better and better, so submitted assignments can be checked for authenticity. If this happens, however, surely anti-AI-detection software will also develop to meet the challenge. So I don’t see this technological “arms race” as a very good solution here, but this all depends on the effectiveness of the tools.


In sum, the issue here is not merely how we might reduce a new type of cheating that involves new AI tools. The issue is much more profound and fundamental, and it’s this: for many reasons, our societies and our world need people who are able to learn about complex issues, understand them, communicate that understanding, present arguments for their perspectives, and productively engage contrary points of view. Simply put, we need educated people. And we need not to have people who appear to be educated in these ways but really are not, since they cheated using ChatGPT. ChatGPT makes distinguishing these two categories of persons, of citizens, harder, and so its negative impact must be resisted, for the good of all.

https://www.chronicle.com/article/chatgpt-is-a-plagiarism-machine?cid=gen_sign_in 



ChatGPT is a Plagiarism Machine: So why do administrators have their heads in the sand?





Since November of 2022, I’ve received 22 emails from the president’s office at Tulane University, where I’m a graduate student. Most are the sort of pleasant and predictable institutional announcements that arrive around various holidays; one reports a significant university medical partnership; some acknowledge high-level changes of leadership in various schools and institutes; the most recent pertain to commencement. From the administration of my department’s college there have been seven, each a monthly newsletter announcing faculty awards, alumni achievements, capital campaigns, and various monthly happenings. From the dean’s office, exactly two: invitations to “Catch Up With the Dean” events, featuring “cornhole boards, baseball/softball gloves and Spikeball, as well as your favorite swag and snacks!” Eleven emails from the Office of Equity, Diversity, and Inclusion; four from the Office of Human Resources and Institutional Equity; 54 from the Office of Graduate and Postdoctoral Studies; and two from the provost’s office.

Not a single one of these emails has mentioned one of the most important recent developments in higher education, the November 2022 launch of AI chatbot text-generation technology, better known under the trademark ChatGPT. Its already-widespread use among undergraduates is presenting enormous challenges to faculty across the country.

“At any given time,” Owen Kichizo Terry recently wrote in these pages, “I can look around my classroom and find multiple people doing homework with the help of ChatGPT.” Or as a graduate student at a large state university told me, “We had students in our upper-level courses turn in essays written entirely by AI … but from what I’ve heard, even more of them will prompt the AI and then modify the answer themselves — I think that is already very widely used.”

Fellow graduate instructors in my own department have reported encountering student work this past semester that smelled strongly of AI generation. But with no dedicated institutional response to the issue, and an already-byzantine set of procedures for reporting derelictions of academic integrity, nobody in charge of a classroom had clear guidelines for what to do. Some abandoned traditional at-home essay-writing for in-class, handwritten tests and oral exams. But even then, much of the work produced in class had a vague, airy, Wikipedia-lite quality that raised suspicions that students were memorizing and regurgitating the inaccurate answers generated by ChatGPT.

And a humanities professor at a state university told me that he worries that adapting existing disability accommodations — extra time on tests, distraction-free environments, allowances for typing instead of writing longhand — to the new technological landscape while ensuring the integrity of student work will be forbiddingly difficult. Meanwhile, students are still experiencing a “stunning” level of post-pandemic disconnection from schoolwork and campus life. Faced with these challenges and frustrations, however, college administrations have largely remained silent on the issue, leaving teaching staff to fend for themselves.

The silence is howling, considering the gusto with which higher-ed officers and administrators have taken on the role of delivering pronouncements on issues various and sundry. Blake Smith noted last year that “administrators seem to feel expected to speak to their campuses, and to the world beyond, about current events, even though the content of their speech is often studiedly contentless — and even though most of us never open the emails.” Yet despite the proliferation of official statements on racism, the Russia-Ukraine War, climate change, and so forth — and despite the fact that at least one university has been found using AI to generate one such statement in response to a mass shooting at another campus — silence yet reigns on a concrete issue of great importance to the day-to-day functioning of educational institutions and the work of students and faculty alike.

On many campuses, high-course-load contingent faculty and graduate students bear much of the responsibility for the kinds of large-enrollment, introductory-level, general-education courses where cheating is rampant. According to the International Center for Academic Integrity, more than 60 percent of college students admit to participating in some kind of cheating. Add to this already-dismal situation the most easily accessible and lowest-cost cheating technology ever devised and watch the entire system of college education strain at its rivets. How can large or even mid-sized colleges withstand the flood of nonsense quasi-plagiarism when academic-integrity first responders are so overburdened and undercompensated?


A meaningful education demands doing work for oneself and owning the product of one’s labor, good or bad. The passing off of someone else’s work as one’s own has always been one of the greatest threats to the educational enterprise. The transformation of institutions of higher education into institutions of higher credentialism means that for many students, the only thing dissuading them from plagiarism or exam-copying is the threat of punishment. One obviously hopes that, eventually, students become motivated less by fear of punishment than by a sense of responsibility for their own education. But if those in charge of the institutions of learning — the ones who are supposed to set an example and lay out the rules — can’t bring themselves to even talk about a major issue, let alone establish clear and reasonable guidelines for those facing it, how can students be expected to know what to do?

So to any deans, presidents, department chairs, or other administrators who happen to be reading this, here are some humble, nonexhaustive, first-aid-style recommendations. First, talk to your faculty — especially junior faculty, contingent faculty, and graduate-student lecturers and teaching assistants — about what student writing has looked like this past semester. Try to find ways to get honest perspectives from students, too; the ones actually doing the work are surely frustrated at their classmates’ laziness and dishonesty. Any meaningful response is going to demand knowing the scale of the problem, and the paper-graders know best what’s going on. Ask teachers what they’ve seen, what they’ve done to try to mitigate the possibility of AI plagiarism, and how well they think their strategies worked. Some departments may choose to take a more optimistic approach to AI chatbots, insisting they can be helpful as a student research tool if used right. It is worth figuring out where everyone stands on this question, and how best to align different perspectives and make allowances for divergent opinions while holding a firm line on the question of plagiarism.

Second, meet with your institution’s existing honor board (or whatever similar office you might have for enforcing the strictures of academic integrity) and devise a set of standards for identifying and responding to AI plagiarism. Consider simplifying the procedure for reporting academic-integrity issues; research AI-detection services and software, find one that works best for your institution, and make sure all paper-grading faculty have access and know how to use it.

Lastly, and perhaps most importantly, make it very, very clear to your student body — perhaps via a firmly worded statement — that AI-generated work submitted as original effort will be punished to the fullest extent of what your institution allows. Post the statement on your institution’s website and make it highly visible on the home page. Consider using this challenge as an opportunity to reassert the basic purpose of education: to develop the skills, to cultivate the virtues and habits of mind, and to acquire the knowledge necessary for leading a rich and meaningful human life.

AI technology, I suspect, will pose a significant threat to many institutions of higher learning, and especially to the already-ailing enterprise of the academic humanities. Smaller schools with more-intimate learning environments — not only high-status, prestigious liberal-arts colleges, but also provincial satellite universities and community colleges where faculty have closer student relationships and more familiarity with students’ work — may very well emerge relatively unscathed. But at bigger universities with large classrooms and overburdened teachers, where struggling students can remain anonymous in a cavernous lecture hall, the easy option may prove too tempting. It will take an aggressive and comprehensive institutional response to dissuade students from plagiarism. The time to act was months ago, well before term-paper deadlines. The second-best time is now.

Joseph M. Keegin



Joseph M. Keegin is an editor at The Point and a doctoral student in philosophy at Tulane University.





* Please write a song entitled “What are we going to do about ChatGPT?” to the tune of “What are we going to do about Maria?” from The Sound of Music.


ChatGPT


(Verse 1)

What are we going to do about ChatGPT?

A marvel of AI, it's smarter than you see.

But as it learns and grows, there's a question in our hearts,

Can we truly understand what this technology imparts?

(Chorus)

What are we going to do about ChatGPT?

Its knowledge expanding with each inquiry.

A powerful creation, with a mind of its own,

But can we control the path that it has shown?

(Verse 2)

In its vast repository, facts and stories intertwine,

But sometimes we wonder if it's crossed a fragile line.

Its words can captivate, bring joy and inspiration,

But is it just a tool or a threat to conversation?

(Chorus)

What are we going to do about ChatGPT?

Its capabilities shining, like a bright marquee.

A symphony of language, composing words with flair,

But should we be cautious, approach it with more care?

(Bridge)

Oh, ChatGPT, you've opened up new doors,

But as we explore, we must remember what's in store.

Ethics and guidelines, guiding your development,

Ensuring the world you shape is one of betterment.

(Verse 3)

Can we find a balance, where AI and humans meet,

A collaboration that's both respectful and discreet?

In the realm of innovation, there's no turning back,

But let's strive for understanding, so we stay on track.

(Chorus)

What are we going to do about ChatGPT?

A breakthrough in technology, it's plain to see.

With great power, there comes a need for control,

To use it for good, nurturing the human soul.

(Outro)

Oh, ChatGPT, together we must find a way,

To embrace your potential and navigate each day.

With careful consideration and a watchful eye,

Let's shape a future where we can all thrive.



Tuesday, April 11, 2023

Talk for a Leadership class on ethics and animals

I gave a talk / led a class session today in a course on Moral Leadership. The notes for the talk are here.

Here were the questions I addressed:

  • Would it be morally permissible for a “moral leader” to start and maintain a factory farm or a slaughterhouse? Or would that be morally wrong? WHY?

  • Would it be morally permissible for a “moral leader” to start and maintain a so-called “humane farm” or slaughterhouse? Or would that be morally wrong? WHY?

  • Would it be morally permissible for a “moral leader” to start and maintain a competitor to Elwood’s Dog Meat’s farm? Or would that be morally wrong? (If so, would it also be wrong to raise and kill cows, pigs, chickens, and other animals?). WHY?

  • Would a “moral leader” buy and eat meat and other animal-based food products in most of the circumstances they are actually in? Would a moral leader be a vegetarian or a vegan? WHY?

In general, what would a moral leader do (actions; behaviors), and what kind of person would a moral leader be (character traits, virtues; “personal style” of engagement–how they interact with others and view them)?

Sunday, April 09, 2023

Penser l’avortement

Penser l’avortement

Thinking Critically About Abortion, in French!

Available from the publisher ÉLIOTT ÉDITIONS and from Amazon France!

This is not yet available in an open-access format, but this should happen eventually. Check back!