
I Don't Care if Generative AI Would Make My Writing More Efficient, I Do Everything Manually

Joshua the Writer

Very Nerdy Guy, Any Pronouns
V.I.P Member
I do every single piece of writing manually. I have tried experimenting with making something out of AI, but all the services I have tried were either mediocre or downright rat feces in terms of quality, so much so that I always felt the urge to edit what the AI wrote just to make it passable as quality writing with its own unique style. I don't mind AI-based grammar checkers, since they're non-generative and are instead a support tool. I just don't use them because I can't really find any that can actually comprehend my style of writing, since those tools are based on pretty generic styles, such as standard academic styles. They are likely intended to be used to write essays, as well. I often end up grammar checking myself by simply referencing online dictionaries and the like.

I was once told, "Oh, you can speed up your writing tempo when you are stuck on a transition for too long." I honestly find those types of challenges fun creative puzzles. I don't want to use AI just to create my work faster. I want to have fun with my writing, and that includes figuring out things like scene transitions and other smaller details. I often put in some placeholder text if I want to keep my writing pace and then go back to figure it out later. I don't need AI for that. It would take away from all the fun.

I would like to create things that have soul.
 
I often put in some placeholder text if I want to keep my writing pace and then go back to figure it out later.
I do that as well. But in one of my published books, I didn't get back to a placeholder, and it was overlooked through several edits and actually made it into the published work. And it was in [Square brackets] to make it visually stand out! I suspect more than a few readers scratched their heads over that.
 
I agree on AI not being useful in writing stories. I tried a few scenarios just to see what happened, and it was more like reading something out of an English as a Second Language textbook rather than reading an actual story.
 
Almost everyone uses AI wrong. I think that's why it hasn't translated into the promises that were initially offered.

AI these days usually refers to LLMs, Large Language Models. I won't get into the technical details of those. But basically you can think of these as ingesting the entire corpus of human language, and being able to spit out the most probable responses.

Another term for "most probable," arguably, is "most mediocre."

If you ask LLMs to write a story on their own, they will spit out the most cliche stuff - by definition. They are capable of nothing else. No amount of machine learning engineering is gonna make them "smarter." They are probabilistic sequence generators. Nothing more.
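Here's a toy sketch of that point in Python, with next-word probabilities I've simply made up for illustration (real LLMs work over tokens and enormously bigger distributions). If the decoder always grabs the single most probable next word, the interesting but less likely continuations never get a look-in:

```python
# Made-up next-word probabilities, purely for illustration.
next_word_probs = {
    ("the", "night"): {"was": 0.60, "fell": 0.33, "smelled": 0.05, "unravelled": 0.02},
    ("night", "was"): {"dark": 0.55, "young": 0.25, "a": 0.12, "honest": 0.08},
    ("was", "dark"): {"and": 0.70, "enough": 0.20, "with": 0.10},
    ("dark", "and"): {"stormy": 0.65, "quiet": 0.25, "sweet": 0.10},
}

def greedy_continue(first, second, steps=4):
    """Always pick the most probable next word -- the 'mediocre' choice."""
    words = [first, second]
    for _ in range(steps):
        dist = next_word_probs.get((words[-2], words[-1]))
        if not dist:
            break
        words.append(max(dist, key=dist.get))
    return " ".join(words)

print(greedy_continue("the", "night"))  # -> "the night was dark and stormy"
```

At every step a more surprising option was sitting right there; picking "most probable" every time walks you straight into the continuation everyone has already written.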

They have been amazing in helping me write code. But here's the thing. 90% of coding is boilerplate. Only the other 10% can be designated as truly innovative. So, LLMs are 90% correct in coding and 10% completely wrong.

Your average coder will take an LLM response, see that it doesn't compile, and completely give up on it and jump on the "AI is a nothingburger" bandwagon, rather than using their own critical thinking faculties, seeing that the LLM has written 90% of the code, and focusing their efforts on identifying and fixing that remaining 10%.

Similarly, LLMs are misused for writing, IMO. They should not be used to write large sections of text. Instead, you do that yourself, and when you find yourself struggling over an imprecise sentence or a point you don't quite know how to make, that's when you ask the LLM to help clarify.

That's a lot of words to say "LLMs are a tool, LLMs aren't AI."
 
I think it is sad that so many people are turning to AI and probably never even discovering talents that they have.
 
After making my last post, I figured I would give ChatGPT another go, since it's been a while since I did the above. I gave it a very terse outline of one of my own novels and told it to write a 1,000-word story based on the outline.
It was actually kind of spooky. What it produced was so much like the first two chapters of my own story that I seriously have to wonder whether my story wasn't used as part of the training of the AI.
 
It's bad enough that Hollywood has lost much of its creativity, choosing to resort to recycling plots, characters and stories.

What will happen when so many wannabe authors begin to tap the same technological resources, which might inadvertently regurgitate the same plots that can then be litigated against in a civil court of law?
 
Your average coder will take an LLM response, see that it doesn't compile, and completely give up on it and jump on the "AI is a nothingburger" bandwagon, rather than using their own critical thinking faculties, seeing that the LLM has written 90% of the code, and focusing their efforts on identifying and fixing that remaining 10%.

Similarly, LLMs are misused for writing, IMO. They should not be used to write large sections of text. Instead, you do that yourself, and when you find yourself struggling over an imprecise sentence or a point you don't quite know how to make, that's when you ask the LLM to help clarify.

I couldn't agree more.

AI is pretty great for getting the ball rolling or pointing out something you didn't see (like a project collaborator), but ultimately you still get to be the boss and make the important decisions and perform overrides. And with that attitude, you (or I, I should say) end up using it pretty sparingly for projects.

Back in the day it was Stack Overflow code that wouldn't run 100% on your machine and you had to learn more and fix it for your own purposes -- some things never change :)
 
But it's such an easy way to produce the same old pre-digested rubbish that anyone else could do by learning to type basic English questions at the chatbot prompt.

It can be useful, but anyone who thinks it absolves them of needing to understand what they've received from it, and fact-check it, and process its semantics - well, if you can do all that, you can write something more original as a human being, and if you can't process AI output before throwing it at others, then you're just a proxy for the AI owners and the material it was trained on. A case of a really nice bit of French polishing on a piece of cheap plywood of unknown strength?

I once had an argument, or rather a conversation, with someone online about AI, and the rude sod started using ChatGPT to reply to me! How insulting! I was talking to him/her from my heart (whether I was right or wrong in what I said) and they used the very source of contention to reply to me, instead of their own real considerations. Made me want to vomit, to be frank, and I got a little aggressive with them, literarily speaking - went to town on them over doing that - and never heard from them again (not rude or insulting, but brutal about my perception of their behaviour), which ultimately was a little sad and a bit of a failure on my part, but WTH???? Is that just me? Would anyone else find it repulsive to be treated like that? Am I an anachronism now?

I don't even like the idea of using it for coding. If you need it for algorithmic help, then you don't understand the problem and can be less sure the output will always follow what's wanted (regardless of bugs n' stuff); if you need help with applying your own algorithm in code, then you don't understand the coding language and how to apply it correctly. Its best use, to my mind, is learning new coding principles and languages - a teacher, but not for production code!

Also, as it produces the 'average' of its training data, as guided by the input question, it will likewise only produce the most average code, stifling innovation in favour of productivity. Is producing more code that does the same thing the same way really a good way forward? And putting so many coders out of work in doing so - is that going to promote new and better ways of doing things in future?

All these LLMs strive to wring the most possible profit out of other people's existing works, and in doing so they show what innovation really means - and whether it's being stifled or promoted.
 
That's a lot of words to say "LLMs are a tool, LLMs aren't AI."
You're basically saying real AI doesn't exist yet. I'm banking and hoping on this stuff to change humanity, man.
 
The problem is definition of terms rather than the existence of intelligence in software.
There have been intelligent algorithms for ages, but as jsilver says above, "AI" is mostly now used to refer to the Large Language Models that have recently gained public popularity (and awareness).

Consider this - a slime mold shows intelligent behaviour, yet doesn't have anything we can recognise as a nervous system, never mind cognition. It's not even multi-cellular.
Intelligence is very much a different thing to awareness and understanding.

Plus, the companies producing these things are working hard to shape the opinions of a public that has no understanding of how they work, either at the computing level or just at the level of what they do to produce an appearance of understanding.

It has potential for good, and potential for bad (like most technology). Unfortunately the people who control it currently have a very poor track record for doing good generally, and this is a tool even more powerful than previously available and hence more dangerous if used wrongly. Like comparing conventional weapons to nukes.
 
When I'm stuck I try to look for prompts. AI is terrible even at that.
The couple of times I asked for a prompt, the dumb things gave me a summary of a story.

I want things like: Cook, Apartment Building, Winter, Hot, Soundless, Empty Road...

Those are what I call prompts. Not bleeping story summaries!
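For what it's worth, you don't need an AI at all for that kind of prompt. A few lines of Python picking random seed words (the word lists below are just ones I've invented as examples) will hand you exactly that sort of list:

```python
import random

# Invented word lists -- swap in whatever categories you like.
people = ["cook", "locksmith", "night-shift nurse", "retired teacher"]
places = ["apartment building", "empty road", "ferry terminal", "attic"]
moods = ["winter", "hot", "soundless", "flickering", "damp"]

def writing_prompt():
    """Return a handful of unrelated seed words, not a plot summary."""
    return ", ".join(
        random.choice(group).title() for group in (people, places, moods)
    )

print(writing_prompt())  # e.g. "Cook, Empty Road, Soundless"
```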
 
That's because intelligence can be very stupid! 😏
But there'll be sites that have info on how to get what you're after for the various 'bots.
Part of the problem is that they produce content that contains apparent meaning, but there's zero understanding of what's being produced - just the probability that, in most cases, one word will be followed by another specific word, based on the texts it's been trained with. Yet the end product looks compellingly as if there is understanding of what you asked and what it's replying.
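If it helps, here's a crude sketch of that "one word follows another" idea using nothing but the Python standard library. A bigram chain like this understands nothing whatsoever, yet its output superficially resembles language; LLMs are enormously more sophisticated, but at bottom they're still playing the next-word-probability game:

```python
import random
from collections import defaultdict

# A tiny, made-up "training corpus" -- just for illustration.
training_text = (
    "the model predicts the next word and the next word follows the last "
    "word so the text looks like language but the model understands nothing"
)

# Record which words follow which in the training text.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def babble(start="the", length=12):
    """Generate fluent-looking, meaning-free text one plausible word at a time."""
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(babble())  # e.g. "the text looks like language but the model predicts the next word"
```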

There was a famous case of an AI engineer (he was working for Google, if I remember rightly - certainly one of the big AI companies) who became convinced it had become self-aware! Total nonsense, and yet the output was so compelling to him that he overrode all his technical knowledge and common sense with that irrational belief. If I recall, he started calling for it to be treated as a sentient being (I think they 'let him go'! 😄)

Also, on a different tack (but fulfilling my hobby of LLM AI dissin'), be mindful that it consumes a humongous amount of energy (with the associated carbon pollution). Microsoft and Google have apparently increased their energy consumption by about 50% due to the introduction of AI, and they were already prodigious users of electricity.
But they don't like to mention that side of things, for some unfathomable reason!
 
