I think that is the main purpose of AI at this moment in time: using it to carry out low-level tasks so that your time is better spent on higher-value tasks.
What sort of complex tasks have you tried to do so far? I've set mine up as a bit of an org chart, so complex specialist tasks can be dedicated to a specific agent.

OpenClaw is great; I'm running it in a virtual machine and using Codex from ChatGPT Plus. For simple tasks it saves a lot of time. It doesn't seem to be able to complete more complex ones yet.

Paul.
Developing a very, very simple CMS that displays the content of .md files as a simple website. It kind of worked, but it had to be prompted again and again.
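For context, the kind of tool being described can be sketched in a few lines. This is a minimal illustration, not the code the poster actually generated; the Markdown subset handled (headings and paragraphs only) and the function names are assumptions:

```python
import re
from pathlib import Path

def md_to_html(text: str) -> str:
    """Convert a tiny Markdown subset (#-style headings and blank-line
    separated paragraphs) to HTML."""
    parts = []
    for block in text.strip().split("\n\n"):
        m = re.match(r"(#{1,6})\s+(.*)", block)
        if m:
            # Heading level is the number of leading '#' characters.
            level = len(m.group(1))
            parts.append(f"<h{level}>{m.group(2).strip()}</h{level}>")
        else:
            parts.append(f"<p>{block.strip()}</p>")
    return "\n".join(parts)

def render_site(content_dir: str) -> dict:
    """Map each .md file's stem to an HTML page body."""
    return {
        path.stem: md_to_html(path.read_text(encoding="utf-8"))
        for path in Path(content_dir).glob("*.md")
    }
```

A full CMS would also need a web server in front of `render_site`, but even this skeleton shows how small the core of the task is.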
Indeed. But in most cases people are using the free versions.
I never use the free versions; I just don't want my data used for training, and I feel more comfortable training my own GPT on my data, ringfenced for my use.
Yes, the biggest thing is to never use the free version. You are the product with the free version, and always check how they use the data from even the paid versions.
Most use the free versions in business because they know no better; with the way most have subscriptions, one subscription for the paid variant would make little impact on what they are spending. Free for a business presents the greatest risk, so making sure they are aware of that risk is key. And an alternative? Most alternatives are ten times the cost of outsourcing whatever they are trying to do.

Whilst I agree, that's not how the majority use AI tools. They use it because it is free. If they had to pay, they would look for an alternative.
There has been a lot of noise around AI and legal use cases. There was a US class action lawsuit for a couple of billion where they used ChatGPT to collate all of their citations, but it added in fabricated ones; the case got thrown out of court and, I believe, the law firm was fined.
There have been a lot of false case citations in the UK as well, and if I need to check that each one exists and says what AI tells me, there is very little point in that side of things. For the rest, I think the environmental cost outweighs the minor admin assistance it can give. I am happy for others to disagree.
If used in the right way it can make the time taken for various tasks shrink, but it has to be set up in the right way. It can draft legal docs and look for current advice, but it needs to be told exactly what to search for, based on current regs etc. It can do that; it just has to be instructed to do it. The best way I can describe it is like a springer spaniel: they are great at the job they do, but they are batshit crazy and get distracted by the scent or sight of something, so unless you instruct it properly it won't do the job it was trained to do.
Other areas of legal and professional services where we have helped clients are the more mundane parts of the role: taking notes, drafting emails from client calls, and the admin-heavy monotony, so law firms can keep their fee earners focused on face-to-face client work.
It depends on how you highlight which citations you have used, and you can add rules to ensure it verifies citations too; it's all in how you build it and how you can audit how it got to the answer.
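As an illustration of the "verify every citation" rule being described, here is a minimal sketch. The citation pattern and the verified register are assumptions made for the example; a real pipeline would check against an authoritative case-law database rather than an in-memory set:

```python
import re

# Illustrative pattern for neutral-citation-style references like
# "[2019] UKSC 41". Real citation formats vary far more than this.
CITATION_RE = re.compile(r"\[\d{4}\]\s+[A-Z]{2,}\s+\d+")

def extract_citations(draft: str) -> list:
    """Pull citation-shaped references out of AI-drafted text."""
    return CITATION_RE.findall(draft)

def audit_citations(draft: str, verified_register: set) -> dict:
    """Split citations into those found in a trusted register and those not,
    so every unverified one can be checked by a human before filing."""
    found = extract_citations(draft)
    return {
        "verified": [c for c in found if c in verified_register],
        "unverified": [c for c in found if c not in verified_register],
    }
```

Anything landing in the `unverified` bucket is exactly the fabricated-citation failure mode from the US lawsuit mentioned above, caught before it reaches a court.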
There was an interesting post (on a different thread) recently. The OP was looking for employment law advice to deal with a very specific situation. I posted advice based on my knowledge and experience. Someone else posted advice that was different to mine and, fortunately, included some details. It was clearly AI-generated and at least 7 years out of date.
I am very aware of a lot of legal claims where AI has generated completely false cases to support the legal arguments.
In my field I would have used AI to check my advice for current legislation. Given those examples, why would I even consider it?
That is true; AI is like a puppy always trying to please you. Can't remember the study.

Similarly, if you look at the legal forum here, you will see cases of misguided defendants relying on wholly inaccurate or misleading AI to support themselves.
The big problem being that AI is great at platitudes.
I am sorry, but I haven't the faintest idea what your second paragraph means.
I feel like I am being harsh by saying this, but a lot of the issues people face are PICNICs; chat models just help people do that without any feedback, without fully understanding the tools they are using, as the marketing peeps have done really well, lol.
PICNIC - Person In Chair Not In Computer.
Nope, no further forward.
So the issue is 'person in chair, not in computer' - really, you've never heard that one? There are a couple of variants, but I can't remember them off the top of my head.
I have an IT background and many years in the industry under my belt, and I don't know what that means.
Damn, I thought it was pretty universal in UK terms, lol. It can also be 'problem in chair, not in computer'. There were a few more that some of the grizzled support techs I used to work with knew, but I've forgotten them all.
So, since you were implicitly asked twice but failed to provide an explanation for the acronym, I looked it up:
So it was tongue in cheek.
Definition 1 (from Cyberdefinitions)
In IT, PICNIC is an acronym used with the meaning "Problem In Chair Not In Computer." It is used by IT technicians when it is apparent that a user is not having a problem with a device because of an issue with the hardware or software, but because they are incompetent.
Definition 2 (from Wiktionary)
(humorous) Acronym of problem in chair, not in computer: states that the problem was not in the computer but was instead caused by the user operating it.
In other words, you seem to be implying that AI failures are the user's fault because the user is incompetent.
Seeing that you work inside the AI supply-chain, such a claim coming from you is not surprising. But the claim is not supported by facts.
I myself have found that the most popular models spew out inaccurate information fairly regularly.
Don't blame the users when the models are designed to accept any input and when they (the models) reply to such input in an extremely confident but totally incorrect way.
Never heard of it, although ctrlbrk's version is at least intelligible. But my problem is with the whole paragraph. Possibly punctuation may help, but I don't understand what you are trying to say.
That wasn't what I was saying in relation to CRMs. I've seen any number of messes with CRM systems due to the way they were set up, and I would apply the same rules to working with AI. Yes, AI guesses based on the vast knowledge it has, but it still requires verification. Like with analytics etc., it's as good as the inputs you give it, and at its current level of development it still needs a human in the loop to check.

@Data Swami - your analogy is wrong.
A CRM doesn't make things up. You input data which is validated and verified, and you get some sort of output based on that data. If the data is wrong, the CRM doesn't make up outputs based on what it thinks you really mean.
AI guesses what you might be trying to do and gives you an answer that may or may not be correct. It has nothing to do with a PICNIC or experience. AI just makes things up.
So it seems that PICNIC is a modern version of what we called GIGO.
Except that AI isn't just relying on data input; it's interpreting and scraping information from a range of sources which (to the best of my knowledge) can't be specified or identified?
My wife uses HEIDI (as used by the NHS) for patient notes. It undoubtedly saves a lot of time and repetition, but it requires close management to prevent it from spouting nonsense - potentially dangerously so. My favourite, silly example is when it fused casual chat with the medical notes and linked a client's shoulder pain to tomato growth in warm weather.
Sources most definitely can be specified for the information it's pooling if it's searching the internet, but I suspect it won't cite sources for all the information it was trained on, at least until legislation gets put in place, as OpenAI and pretty much all the others were very naughty in grabbing the whole internet to train their models on.
Sounds like HEIDI's guardrails need a bit of work, XD. It would be interesting to test it for prompt injection too, as I know plenty of chatbots that still don't have that protection, so you can force them to do whatever you want - like the case of Ford's website chatbot that someone got to offer them a truck for $1.
@Data Swami - you need to bring your posts down to the level of us thickies. Assume we know nothing other than the ability to enter a prompt into ChatGPT or whatever. That's about all most of us can manage.
And whilst a CRM can be badly configured, there are no configuration options when entering a prompt into an AI tool. You ask it a question and it provides a plausible but unverified answer. A CRM, no matter how badly configured, doesn't do that.
This is a business forum, not a "what friends use a consumer version of an LLM chat program for" forum. Anthropic has some great tools; most of these "glitches" are the AI answering a poor prompt correctly. AI doesn't like ambiguity; specific language, intent and outcome are important too.

For a specific use I agree. But most people I know use it for fun - making pictures and animations, for example. They aren't ever going to pay.
Many businesses are struggling to see any real increase in productivity and are now questioning the monthly cost.
As an aside, I wanted an update to an existing plugin and tried Claude. It took over an hour to get a prompt that worked, but because the plugin uses a custom API, Claude just gave up.
It depends on what you are doing, what you are trying to do, and how you compose your prompts and chat structure.

Much of AI usage is still a waste of time.
Many organizations have discovered replacing people with agentic AI has come back to bite them.
And given the huge resources thrown at AI, it is unlikely to ever recoup the costs. Then there are the environmental costs: a recent report suggests additional data centre power demand will exceed the UK's total energy production, and water consumption will be greater than replenishment rates.
Yes AI will get better. But the cost in employment and resources may make it unsustainable.
When I compare how I used ChatGPT when I was first shown it over a year ago to how I use it and Claude today, they are light years apart.
How did you make the change between how you started using ChatGPT and how you now create the more complex stuff you have?
In the past I would enter a short, simple prompt, very similar to how I'd speak to a work colleague who had been working with me for years - someone who knows my business, knows what I'm trying to do, and understands my way of working.
These days I have spent several days building custom GPTs (my own bespoke AIs) that have been trained on sales brochures, legal documents and data structures, and they are told not to go to external data for any information and to focus only on the briefing documents I've provided.
My prompts follow a sort of JSON structure, with an opening context and objective and measures of success and failure. I've spent days training the GPTs (AIs) on their purpose and objectives and the type of outputs I'm looking for.
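A JSON-style prompt of the kind described above might be built like this. The field names, the company, and the task are all illustrative assumptions, not the poster's actual schema:

```python
import json

# Hypothetical prompt skeleton: context, objective, and explicit
# success/failure criteria, serialised and pasted into the chat.
prompt = {
    "context": (
        "You are a sales assistant for Acme Ltd (a made-up company). "
        "Use only the attached briefing documents; do not consult "
        "external sources."
    ),
    "objective": "Summarise the key warranty terms from the attached brochure.",
    "success_criteria": [
        "Every claim cites a section of the briefing documents.",
        "Output is under 300 words.",
    ],
    "failure_criteria": [
        "Any fact not traceable to the briefing documents.",
    ],
}

prompt_text = json.dumps(prompt, indent=2)
```

The point of the structure is less the JSON itself than forcing the author to state scope and failure conditions explicitly, which plain conversational prompts rarely do.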
The end result is that I find myself with more free time. Research and tasks that used to take me days to do manually now take a few minutes.
To be fair, social media in general was "making people dumber" even before ChatGPT; the old "Google it" advice still bamboozles so many people. Just look at the TikToks of "what is this" or "why is this". I'd say it's more that school and other influences don't teach analytical or evaluation skills very well.

New Study Shows What AI Is Really Doing To Your Brain
AI was meant to make our jobs easier, to make them more efficient. However, according to research from Harvard Business Review, workers tasked with overseeing different AI agents as part of their daily workflow said it didn't simplify the work. Instead, it intensified it. And the authors note that instead of helping, the use of multiple AIs in the workflow could even lead to mental fatigue, thus directly affecting the brain. This isn't the first time that we have seen reports about how AI can affect the mind. Previously, a study from MIT showed that critical thinking skills were atrophying thanks to an over-reliance on AI. Further, we've seen a slew of other studies that have pointed to the same concerns: ChatGPT is making people dumber.
From BGR.
Don't forget, video games are making us more violent too, XD.

Indeed, but I don't think it's just AI; it's symptomatic of so many tools. For example, SatNav has significantly affected the human sense of direction and the ability to navigate maps and find your way around. People now use SatNav for the simplest of routes, avoiding even a moment's thought.
Then we have Google and social media, when we used to use libraries and had to research through books, crippling our ability to research objectively.
I recall an earlier job that used to involve a lot of travel; whenever I visited a new town, the first thing I did was pop into the first garage I saw and buy a local map so I could find the place I needed to visit (as there was no Internet or SatNav back then). The ability to find your way around a place without a computer directing you is lost these days.
AI is just another evolution of the human reliance on technology; it's 'progress' @ctrlbrk.