AI does have its uses

Ozzy

Founder of UKBF
UKBF Staff
  • Feb 9, 2003
    8,319
    11
    3,437
    Northampton, UK
    bdgroup.co.uk
    For simple tasks it saves a lot of time.
    I think that is the main purpose of AI at this moment in time: using it to carry out low-level tasks so that your time is better spent on the higher-value tasks.
     
    Upvote 0

    Data Swami

    Business Member
  • Business Listing
    OpenClaw is great, I'm running it in a virtual machine and using Codex from ChatGPT Plus.

    For simple tasks it saves a lot of time.

    It doesn't seem to be able to complete more complex ones yet.

    Paul.
    What sort of complex tasks have you tried so far? I've set mine up as a bit of an org chart, so complex specialist tasks can be dedicated to a specific agent.
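    The "org chart" idea can be sketched as a simple dispatcher that hands each task to a dedicated specialist agent. This is only an illustrative sketch in Python; the agent names and the keyword-based routing rule are assumptions for illustration, not Data Swami's actual setup, and a real version would call an LLM API instead of returning a name:

```python
# Route each incoming task to a specialist agent based on its topic.
# Agent names and topics here are hypothetical placeholders.
SPECIALISTS = {
    "code": "dev-agent",
    "legal": "legal-agent",
    "marketing": "copy-agent",
}

def route_task(task: str, default: str = "generalist-agent") -> str:
    """Pick the dedicated agent whose specialism appears in the task text."""
    task_lower = task.lower()
    for topic, agent in SPECIALISTS.items():
        if topic in task_lower:
            return agent
    return default  # no specialist matched, fall back to a generalist

print(route_task("Review this legal contract clause"))  # legal-agent
print(route_task("Summarise today's meeting"))          # generalist-agent
```

    In practice the routing step itself is often another model call ("which of these agents should handle this?"), but the structure is the same: one entry point, many narrowly-briefed specialists.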
     
    Upvote 0

    antropy

    Business Member
  • Business Listing
    Aug 2, 2010
    5,313
    1,098
    West Sussex, UK
    www.antropy.co.uk
    What sort of complex tasks have you tried so far? I've set mine up as a bit of an org chart, so complex specialist tasks can be dedicated to a specific agent.
    Developing a very, very simple CMS that displays the content of .md files as a simple website. It kind of worked, but it had to be prompted again and again.
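    For context, the core of a CMS like the one described above fits in a few dozen lines. A minimal sketch (stdlib Python only, handling just headings and paragraphs; this is an assumption of what such a tool might look like, not antropy's actual code):

```python
# Bare-bones static renderer: turn each .md file in a folder into an
# HTML page. Handles only #/## headings and paragraphs, stdlib only.
import html
from pathlib import Path

def render_md(md_text: str) -> str:
    """Convert a tiny subset of Markdown (headings, paragraphs) to HTML."""
    parts = []
    for block in md_text.strip().split("\n\n"):
        block = block.strip()
        if block.startswith("## "):
            parts.append(f"<h2>{html.escape(block[3:])}</h2>")
        elif block.startswith("# "):
            parts.append(f"<h1>{html.escape(block[2:])}</h1>")
        elif block:
            parts.append(f"<p>{html.escape(block)}</p>")
    return "\n".join(parts)

def build_site(src: Path, out: Path) -> None:
    """Render every .md file in `src` to a matching .html file in `out`."""
    out.mkdir(exist_ok=True)
    for md_file in src.glob("*.md"):
        body = render_md(md_file.read_text(encoding="utf-8"))
        page = f"<!doctype html><title>{md_file.stem}</title>{body}"
        (out / f"{md_file.stem}.html").write_text(page, encoding="utf-8")
```

    The "prompted again and again" experience is common because the model has to guess at unstated requirements (escaping, file layout, which Markdown features to support); spelling those out up front usually shortens the loop.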

    Paul.
     
    Upvote 0

    fisicx

    Moderator
    Sep 12, 2006
    46,656
    8
    15,356
    Aldershot
    www.aerin.co.uk
    I think that is the main purpose of AI at this moment in time: using it to carry out low-level tasks so that your time is better spent on the higher-value tasks.
    Indeed. But in most cases using the free versions.

    I see today that the E7-level MS Office thing with Copilot is £99/person/month.
     
    Upvote 0

    Ozzy

    Founder of UKBF
    UKBF Staff
  • Feb 9, 2003
    8,319
    11
    3,437
    Northampton, UK
    bdgroup.co.uk
    But in most cases using the free versions.
    I never use the free versions; I just don't want my data used for training and instead feel more comfortable training my own GPT on my data ringfenced for my use.
     
    • Like
    Reactions: Data Swami
    Upvote 0

    Data Swami

    Business Member
  • Business Listing
    I never use the free versions; I just don't want my data used for training and instead feel more comfortable training my own GPT on my data ringfenced for my use.
    Yes, the biggest thing is never to use the free version. You are the product with the free version, and always check how they use the data even from the paid versions.
     
    Upvote 0

    fisicx

    Moderator
    Sep 12, 2006
    46,656
    8
    15,356
    Aldershot
    www.aerin.co.uk
    Whilst I agree, that's not how the majority use AI tools. They use it because it is free. If they had to pay they would look for an alternative.
     
    Upvote 0

    Data Swami

    Business Member
  • Business Listing
    Whilst I agree, that's not how the majority use AI tools. They use it because it is free. If they had to pay they would look for an alternative.
    Most businesses use the free version because they know no better. With the number of subscriptions most already have, one more subscription for the paid variant would make little impact on what they are spending. Free presents the greatest risk for a business, so making sure they are aware of that risk is key. And an alternative? Most alternatives cost ten times as much, to outsource whatever they are trying to do.
     
    Upvote 0

    fisicx

    Moderator
    Sep 12, 2006
    46,656
    8
    15,356
    Aldershot
    www.aerin.co.uk
    Again I agree with you but a plumber using AI to help write quotes or do a bit of cash flow isn’t going to pay.

    Today I dropped off a 40-year-old gearbox at a workshop full of old-school engineers. They used a cardboard label to write down my details and tied it to the clutch housing. Even the idea of using AI to manage the reconditioning process would be met with blank looks. Maybe if you stopped by and showed how spending £100 would save them £200 they might be interested, but they aren't going to go looking for change.

    This is the problem you will always be up against. Not change itself but the fact many businesses aren’t looking for change.

    Not everyone is chasing the money.
     
    Upvote 0

    Data Swami

    Business Member
  • Business Listing
    Well yeah, that's like anything: if they aren't aware of the risk, they aren't going to know any different. But I'll always let them know of the risks, as wherever I look at the regs the rules are getting stricter for business, and it could lead to some big fines. SMEs usually don't have to worry, as GDPR enforcement has been pretty toothless for SMEs XD. But with the free versions of AI, finding improperly used data is a lot easier, as it's there for everyone to search.

    And yes, for different ideal clients, some businesses aren't going to be worth targeting unless we go out and build a specific platform for them, which has been done by some people focused only on advertising, and all the ins and outs of that, for plumbers etc. Sure, not everyone is chasing money, but I'm sure the majority of them don't want to be fighting to stay afloat every day, and would like that bit of freedom when they choose to hang up their boots.
     
    Upvote 0

    Newchodge

    Moderator
  • Business Listing
    Nov 8, 2012
    22,631
    8
    7,946
    Newcastle
    There was an interesting post (on a different thread) recently. The OP was looking for employment law advice to deal with a very specific situation. I posted advice based on my knowledge and experience. Someone else posted advice that was different to mine and, fortunately, included some details. It was clearly AI generated and at least 7 years out of date.

    I am very aware of a lot of legal claims where AI has generated completely false cases to support the legal arguments.

    In my field I would have used AI to check my advice for current legislation. Given those examples, why would I even consider it?
     
    Upvote 0

    Data Swami

    Business Member
  • Business Listing
    There was an interesting post (on a different thread) recently. The OP was looking for employment law advice to deal with a very specific situation. I posted advice based on my knowledge and experience. Someone else posted advice that was different to mine and, fortunately, included some details. It was clearly AI generated and at least 7 years out of date.

    I am very aware of a lot of legal claims where AI has generated completely false cases to support the legal arguments.

    In my field I would have used AI to check my advice for current legislation. Given those examples, why would I even consider it?
    There has been a lot of noise around AI and legal use cases. There was a US class action lawsuit worth a couple of billion where they used ChatGPT to collate all of their citations, but it added fabricated ones; the case got thrown out of court and, I believe, the law firm was fined.

    If used in the right way it can shrink the time taken for various tasks, but it has to be set up in the right way. It can draft legal docs and look for current advice, but it needs to be told exactly what to search for, based on current regs etc. It can do that; it just has to be instructed to do it. The best way I can describe it is like a springer spaniel: great at the job they were trained to do, but bat shit crazy and distracted by any scent or sight, so unless you instruct it properly it won't do the job it was trained to do.

    But other areas of legal and professional services we have helped clients with are the more mundane parts of the role: taking notes, drafting emails from client calls, and the admin-heavy monotony, so law firms can keep their fee earners focused on face-to-face client work.
     
    • Like
    Reactions: Ozzy
    Upvote 0

    Newchodge

    Moderator
  • Business Listing
    Nov 8, 2012
    22,631
    8
    7,946
    Newcastle
    There has been a lot of noise around AI and legal use cases. There was a US class action lawsuit worth a couple of billion where they used ChatGPT to collate all of their citations, but it added fabricated ones; the case got thrown out of court and, I believe, the law firm was fined.

    If used in the right way it can shrink the time taken for various tasks, but it has to be set up in the right way. It can draft legal docs and look for current advice, but it needs to be told exactly what to search for, based on current regs etc. It can do that; it just has to be instructed to do it. The best way I can describe it is like a springer spaniel: great at the job they were trained to do, but bat shit crazy and distracted by any scent or sight, so unless you instruct it properly it won't do the job it was trained to do.

    But other areas of legal and professional services we have helped clients with are the more mundane parts of the role: taking notes, drafting emails from client calls, and the admin-heavy monotony, so law firms can keep their fee earners focused on face-to-face client work.
    There have been a lot of false case citations in the UK as well, and if I need to check that each one exists and says what AI tells me it says, there is very little point in that side of things. For the rest, I think the environmental cost outweighs the minor admin assistance it can give. I am happy for others to disagree.
     
    • Like
    Reactions: ctrlbrk and fisicx
    Upvote 0

    Data Swami

    Business Member
  • Business Listing
    There have been a lot of false case citations in the UK as well, and if I need to check that each one exists and says what AI tells me it says, there is very little point in that side of things. For the rest, I think the environmental cost outweighs the minor admin assistance it can give. I am happy for others to disagree.
    It depends on how you highlight which citations you have used, and you can add rules to ensure it verifies citations too; so it's all in how you build it and how you can audit how it got to the answer.

    And in terms of minor admin assistance, see how long each step takes and add that up over the month and year. When we've assessed clients, it has saved their fee earners over half their time, so they can focus on face-to-face work and get out of the admin faff. And a lot of it isn't AI, it's just automation, so no AI environmental cost.
     
    Upvote 0

    fisicx

    Moderator
    Sep 12, 2006
    46,656
    8
    15,356
    Aldershot
    www.aerin.co.uk
    But….

    If it’s just automation of routine tasks you may not need AI. You might even just need to refine your processes to save huge chunks of time.

    I worked for a company that spent days each month creating a set of analytical documents. We asked the recipients which bits of data they actually needed, and we were able to bin almost all the documents and just send out an email with the few bits of data people wanted.

    As was mentioned in another thread: do the business analysis first to discover where savings can be made. And then decide if AI can help.
     
    Upvote 0
    There was an interesting post (on a different thread) recently. The OP was looking for employment law advice to deal with a very specific situation. I posted advice based on my knowledge and experience. Someone else posted advice that was different to mine and, fortunately, included some details. It was clearly AI generated and at least 7 years out of date.

    I am very aware of a lot of legal claims where AI has generated completely false cases to support the legal arguments.

    In my field I would have used AI to check my advice for current legislation. Given those examples, why would I even consider it?

    Similarly, if you look at the legal forum here, you will see cases of misguided defendants relying on wholly inaccurate or misleading AI to support themselves.

    The big problem being that AI is great at platitudes.
     
    • Like
    Reactions: ctrlbrk
    Upvote 0

    Data Swami

    Business Member
  • Business Listing
    Similarly, if you look at the legal forum here, you will see cases of misguided defendants relying on wholly inaccurate or misleading AI to support themselves.

    The big problem being that AI is great at platitudes.
    That is true; AI is like a puppy always trying to please you. Can't remember the study.

    I feel like I am being harsh by saying this, but a lot of the issues people face are PICNICs; chat models just help people do that without any feedback, not fully understanding the tools they are using, as the marketing peeps have done really well lol.
     
    Upvote 0

    Newchodge

    Moderator
  • Business Listing
    Nov 8, 2012
    22,631
    8
    7,946
    Newcastle
    That is true; AI is like a puppy always trying to please you. Can't remember the study.

    I feel like I am being harsh by saying this, but a lot of the issues people face are PICNICs; chat models just help people do that without any feedback, not fully understanding the tools they are using, as the marketing peeps have done really well lol.
    I am sorry, but I haven't the faintest idea what your second paragraph means.
     
    Upvote 0

    Data Swami

    Business Member
  • Business Listing
    Nope, no further forward.
    So, the issue is 'person in chair, not in computer'. Really never heard that one? There are a couple of them, but I can't remember them off the top of my head 🤣
     
    Upvote 0

    Data Swami

    Business Member
  • Business Listing
    I have an IT background and many years in the industry under the belt and I don't know what that means.
    Damn, I thought it was pretty universal in UK terms lol. It can also be 'problem in chair, not in computer'. There were a few more that some of the grizzled support techs I used to work with knew, but I've forgotten them all 🤣
     
    Upvote 0

    ctrlbrk

    Free Member
    May 13, 2021
    989
    391
    Damn, I thought it was pretty universal in UK terms lol. It can also be 'problem in chair, not in computer'. There were a few more that some of the grizzled support techs I used to work with knew, but I've forgotten them all 🤣
    So, since you were implicitly asked twice but failed to provide an explanation for the acronym, I looked it up:

    Definition 1 (from Cyberdefinitions)
    In IT, PICNIC is an acronym used with the meaning "Problem In Chair Not In Computer." It is used by IT technicians when it is apparent that a user is not having a problem with a device because of an issue with the hardware or software, but because they are incompetent.

    Definition 2 (from Wiktionary)
    (humorous) Acronym of problem in chair, not in computer: states that the problem was not in the computer but was instead caused by the user operating it.


    In other words, you seem to be implying that AI failures are the user's fault because the user is incompetent.

    Seeing that you work inside the AI supply-chain, such a claim coming from you is not surprising. But the claim is not supported by facts.

    I myself have found that the most popular models spew out inaccurate information fairly regularly.

    Don't blame the users when the models are designed to accept any input and when they (the models) reply to such input in an extremely confident but totally incorrect way.
     
    Upvote 0

    Data Swami

    Business Member
  • Business Listing
    So, since you were implicitly asked twice but failed to provide an explanation for the acronym, I looked it up:

    Definition 1 (from Cyberdefinitions)
    In IT, PICNIC is an acronym used with the meaning "Problem In Chair Not In Computer." It is used by IT technicians when it is apparent that a user is not having a problem with a device because of an issue with the hardware or software, but because they are incompetent.

    Definition 2 (from Wiktionary)
    (humorous) Acronym of problem in chair, not in computer: states that the problem was not in the computer but was instead caused by the user operating it.


    In other words, you seem to be implying that AI failures are the user's fault because the user is incompetent.

    Seeing that you work inside the AI supply-chain, such a claim coming from you is not surprising. But the claim is not supported by facts.

    I myself have found that the most popular models spew out inaccurate information fairly regularly.

    Don't blame the users when the models are designed to accept any input and when they (the models) reply to such input in an extremely confident but totally incorrect way.
    So it was tongue in cheek.

    Some of the examples are user error or a lack of knowledge, and as I stated earlier you need to ensure it has guard rails, as currently AI is very simple. The marketing around some of the chat models does imply it can do all manner of things, and it can, but as I said it needs to be prompted in specific ways and told exactly what you want it to do, even with a step-by-step methodology, and even then different models require different approaches.

    Practically any piece of software can be an absolute mess if used wrong, and what we are seeing with AI is a lot of inexperienced people using it confidently and, in doing so, not getting the right outputs. Just like with any CRM system or data tool, if you don't have the experience or know the right people, finding the way to get the best outcomes is going to be difficult, and that applies to using language models.
     
    Upvote 0

    fisicx

    Moderator
    Sep 12, 2006
    46,656
    8
    15,356
    Aldershot
    www.aerin.co.uk
    @Data Swami - your analogy is wrong.

    A CRM doesn't make things up. You input data, which is validated and verified, and you get some sort of output based on that data. If the data is wrong, the CRM doesn't make up outputs based on what it thinks you really mean.

    AI guesses what you might be trying to do and gives you an answer that may or may not be correct. It has nothing to do with a PICNIC or experience. AI just makes things up.
     
    • Like
    Reactions: ctrlbrk
    Upvote 0

    Newchodge

    Moderator
  • Business Listing
    Nov 8, 2012
    22,631
    8
    7,946
    Newcastle
    So, the issue is 'person in chair, not in computer'. Really never heard that one? There are a couple of them, but I can't remember them off the top of my head 🤣
    Never heard of it, although ctrlbrk's version is at least intelligible. But my problem is with the whole paragraph. Possibly punctuation may help, but I don't understand what you are trying to say.
     
    Upvote 0
    So it seems that PICNIC is a modern version of what we called GIGO.

    Except that AI isn't just relying on data input; it's interpreting and scraping information from a range of sources which (to the best of my knowledge) can't be specified or identified?

    My wife uses HEIDI (as used by the NHS) for patient notes. It undoubtedly saves a lot of time and repetition, but it requires close management to prevent it from spouting nonsense, potentially dangerously so. My favourite, silly example is when it fused casual chat with the medical notes, and linked the tomatoes growing in the warm weather to a client's shoulder pain.
     
    • Like
    Reactions: fisicx
    Upvote 0

    Data Swami

    Business Member
  • Business Listing
    @Data Swami - your analogy is wrong.

    A CRM doesn't make things up. You input data, which is validated and verified, and you get some sort of output based on that data. If the data is wrong, the CRM doesn't make up outputs based on what it thinks you really mean.

    AI guesses what you might be trying to do and gives you an answer that may or may not be correct. It has nothing to do with a PICNIC or experience. AI just makes things up.
    That wasn't what I was saying in relation to CRMs. I've seen any number of messes with CRM systems due to the way they were set up, and I would apply the same rules to working with AI. Yes, AI guesses based on the vast knowledge it has, but it still requires verification. Like with analytics, it's as good as the inputs you give it, and at its current level of development it still needs a human in the loop to check.

    Yes, AI guesses, because if we aren't clear with it then it has to. Depending on the model it might ask clarifying questions, but other, smaller models won't and will try to give an answer anyway. We haven't got to the point of general intelligence yet, and that's still a ways away. But this narrow AI still has its uses; just like any algorithm or data flow, it needs the knowledge and experience of how to use it well, and the guardrails in place.
     
    Upvote 0

    Data Swami

    Business Member
  • Business Listing
    So it seems that PICNIC is a modern version of what we called GIGO.

    Except that AI isn't just relying on data input; it's interpreting and scraping information from a range of sources which (to the best of my knowledge) can't be specified or identified?

    My wife uses HEIDI (as used by the NHS) for patient notes. It undoubtedly saves a lot of time and repetition, but it requires close management to prevent it from spouting nonsense, potentially dangerously so. My favourite, silly example is when it fused casual chat with the medical notes, and linked the tomatoes growing in the warm weather to a client's shoulder pain.
    So, sources most definitely can be specified for the information it's pooling if it's searching the internet, but I suspect it won't cite sources for all the information it was trained on, at least until legislation gets put in place, as OpenAI and pretty much all the others were very naughty in grabbing the whole internet to train their models on.

    Sounds like HEIDI's guardrails need a bit of work XD. It would be interesting to test it for prompt injection too, as I know plenty of chatbots that still don't have that protection on, so you can force them to do whatever you want. Like the case of Ford's website chatbot that enabled someone to get it to offer them a truck for $1.
     
    Upvote 0
    So, sources most definitely can be specified for the information it's pooling if it's searching the internet, but I suspect it won't cite sources for all the information it was trained on, at least until legislation gets put in place, as OpenAI and pretty much all the others were very naughty in grabbing the whole internet to train their models on.

    Sounds like HEIDI's guardrails need a bit of work XD. It would be interesting to test it for prompt injection too, as I know plenty of chatbots that still don't have that protection on, so you can force them to do whatever you want. Like the case of Ford's website chatbot that enabled someone to get it to offer them a truck for $1.

    I'll be honest here: I didn't really understand any of that. Not that it matters, but it does raise the point which always exists in tech, which is the chasm in terms and comprehension between users and techies.
     
    Upvote 0

    fisicx

    Moderator
    Sep 12, 2006
    46,656
    8
    15,356
    Aldershot
    www.aerin.co.uk
    @Data Swami - you need to bring your posts down to the level of us thickies. Assume we know nothing other than the ability to enter a prompt into ChatGPT or whatever. That's about all most of us can manage.

    And whilst a CRM can be badly configured, there are no configuration options when entering a prompt into an AI tool. You ask it a question and it provides a plausible but unverified answer. A CRM, no matter how badly configured, doesn't do that.
     
    Upvote 0

    Data Swami

    Business Member
  • Business Listing
    @Data Swami - you need to bring your posts down to the level of us thickies. Assume we know nothing other than the ability to enter a prompt into ChatGPT or whatever. That's about all most of us can manage.

    And whilst a CRM can be badly configured, there are no configuration options when entering a prompt into an AI tool. You ask it a question and it provides a plausible but unverified answer. A CRM, no matter how badly configured, doesn't do that.

    I'll show you some of the CRM systems I've had to fix 😂; you can definitely get a plausible, unverified answer from some horribly set-up ones 😂. Sending emails to completely wrong ex-customers, even deceased ones... Pulling the wrong information for customers and then putting that data into tools to see how well the business was doing, giving wildly wrong numbers.

    A lot of the issues that have been highlighted in relation to AI use here are about how you prompt the AI, just as with what you found useful and not useful. It's about being very specific and structured in how you write your prompt; even if it's a long piece of work, make sure it's structured into a bit of a plan, give it as much knowledge as possible, and make it search only the places you want it to search. But also make sure that, whichever tool you are using, you are using the "right" model (Pro/fast), depending on your provider.

    And to be fair, ChatGPT chat is just the tip of what's possible. I've shown Ozzy some of the things I'm developing, and it's getting wilder and wilder what is possible, and how to make sure I have the human-checking steps that don't become a job in themselves.

    So yes, by being a bit tongue in cheek with the PICNIC: it's more of a user knowledge and experience issue than an error, but we've gotta have a bit of fun sometimes 😂.
     
    Upvote 0

    MarkOnline

    Free Member
    Apr 25, 2020
    609
    239
    For a specific use I agree. But most people I know use it for fun. Making pictures and animations for example. They aren’t ever going to pay.

    Many businesses are struggling to see any real increase in productivity and are now questioning the monthly cost.

    As an aside, I wanted an update to an existing plugin and tried Claude. It took over an hour to get a prompt that worked, but because the plugin uses a custom API, Claude just gave up.
    This is a business forum, not a "what do friends use a consumer version of an LLM chat program for" thread. Anthropic has some great tools; most of these "glitches" are the AI answering a poor prompt correctly. AI doesn't like ambiguity; specific language, intent and outcome are important too.
    Much of AI usage is still a waste of time.

    Many organizations have discovered that replacing people with agentic AI has come back to bite them.

    And with the huge resources thrown at AI, it's unlikely to ever recoup the costs. Then there are the environmental costs: a recent report suggests the additional power demand from data centres will exceed the total energy production in the UK, and the water consumption will be greater than the replenishment rates.

    Yes AI will get better. But the cost in employment and resources may make it unsustainable.
    It depends on what you are doing, trying to do and how you are composing your prompts and chat structure.
    You and I have different ideas as to how AI will go, which is fine. I think the printing press will never take off either.
     
    • Like
    Reactions: Data Swami
    Upvote 0

    Ozzy

    Founder of UKBF
    UKBF Staff
  • Feb 9, 2003
    8,319
    11
    3,437
    Northampton, UK
    bdgroup.co.uk
    It depends on what you are doing, trying to do and how you are composing your prompts and chat structure.
    When I compare how I used to use ChatGPT when I was first shown it over a year ago, to how I use it and Claude today, they are light years apart.

    In the past I used to enter a short and simple prompt, very similar to how I'd speak to a work colleague who had been working with me for years. Someone who knows my business, knows what I'm trying to do, and understands my way of working.

    These days I have spent several days building custom GPTs (my own bespoke AIs) that have been trained on sales brochures, legal documents, and data structures, and they're told not to go to external data for any information and to focus only on the briefing documents I've provided.
    My prompts are structured in a sort of JSON structure, with an opening context and objective, and a measure of success and failure. I've spent days training the GPTs (AIs) on their purpose and objectives, and the type of outputs I'm looking for.

    The end result is that I find myself with more free time. Research and tasks that used to take me days to do manually now take a few minutes.
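    A "JSON-structured" prompt of the kind described here can be sketched as below. This is an illustrative guess at the shape, not Ozzy's actual template; the field names (context, objective, success/failure criteria) and the example content are assumptions:

```python
# Build a structured prompt as a dict, then serialise it to JSON and
# paste/send the resulting text as the opening message to the model.
import json

prompt = {
    "context": "You are a sales assistant for a UK B2B firm. "
               "Use only the briefing documents provided; "
               "do not consult external sources.",
    "objective": "Draft a one-page summary of product X for a "
                 "prospective client in the logistics sector.",
    "success_criteria": [
        "Every claim is traceable to a briefing document",
        "Under 400 words, UK English",
    ],
    "failure_criteria": [
        "Any figure not present in the briefing documents",
        "Marketing claims with no source",
    ],
}

prompt_text = json.dumps(prompt, indent=2)
print(prompt_text)
```

    The value of the structure is less the JSON itself than the discipline it enforces: the context, the goal, and what counts as success or failure are all stated explicitly rather than left for the model to guess.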
     
    • Love
    Reactions: Data Swami
    Upvote 0

    Data Swami

    Business Member
  • Business Listing
    When I compare how I used to use ChatGPT when I was first shown it over a year ago, to how I use it and Claude today, they are light years apart.

    In the past I used to enter a short and simple prompt, very similar to how I'd speak to a work colleague who had been working with me for years. Someone who knows my business, knows what I'm trying to do, and understands my way of working.

    These days I have spent several days building custom GPTs (my own bespoke AIs) that have been trained on sales brochures, legal documents, and data structures, and they're told not to go to external data for any information and to focus only on the briefing documents I've provided.
    My prompts are structured in a sort of JSON structure, with an opening context and objective, and a measure of success and failure. I've spent days training the GPTs (AIs) on their purpose and objectives, and the type of outputs I'm looking for.

    The end result is that I find myself with more free time. Research and tasks that used to take me days to do manually now take a few minutes.
    How did you make the change from how you started using ChatGPT to creating the more complex stuff you have now?
     
    Upvote 0

    ctrlbrk

    Free Member
    May 13, 2021
    989
    391

    New Study Shows What AI Is Really Doing To Your Brain​

    AI was meant to make our jobs easier, to make them more efficient. However, according to research from Harvard Business Review, workers tasked with overseeing different AI agents as part of their daily workflow said it didn't simplify the work. Instead, it intensified it. And, the authors note that instead of helping, the use of multiple AIs in the workflow could even lead to mental fatigue, thus directly affecting the brain. This isn't the first time that we have seen reports about how AI can affect the mind. Previously, a study from MIT showed that critical thinking skills were atrophying thanks to an over reliance on AI. Further, we've seen a slew of other studies that have pointed to the same concerns: ChatGPT is making people dumber.

    From BGR.
     
    Upvote 0

    Ozzy

    Founder of UKBF
    UKBF Staff
  • Feb 9, 2003
    8,319
    11
    3,437
    Northampton, UK
    bdgroup.co.uk
    Indeed, but I don't think it's just AI; it's symptomatic of so many tools. For example, SatNav has significantly affected the human sense of direction and the ability to navigate maps and find your way around. People now use SatNav for the simplest of routes, avoiding even a moment's thought.
    Then we have Google and social media, where we used to use libraries and had to research through books, crippling our ability to research objectively.

    I recall an earlier job that used to involve a lot of travel; whenever I visited a new town, the first thing I did was pop into the first garage I saw and purchase a local map so I could find the place I needed to visit (there was no Internet or SatNav back then). The ability to find your way around a place without a computer directing you is lost these days.

    AI is just another evolution of human reliance on technology; it's 'progress' @ctrlbrk ;)
     
    Upvote 0

    Data Swami

    Business Member
  • Business Listing

    New Study Shows What AI Is Really Doing To Your Brain​

    AI was meant to make our jobs easier, to make them more efficient. However, according to research from Harvard Business Review, workers tasked with overseeing different AI agents as part of their daily workflow said it didn't simplify the work. Instead, it intensified it. And, the authors note that instead of helping, the use of multiple AIs in the workflow could even lead to mental fatigue, thus directly affecting the brain. This isn't the first time that we have seen reports about how AI can affect the mind. Previously, a study from MIT showed that critical thinking skills were atrophying thanks to an over reliance on AI. Further, we've seen a slew of other studies that have pointed to the same concerns: ChatGPT is making people dumber.

    From BGR.
    To be fair, social media in general was "making people dumber" even before ChatGPT; the old "Google it" statements still bamboozle so many people. Just look at the TikToks of "what is this" or "why is this". I'd say it's more that school and whatever other influences don't teach analytical or evaluation skills very well.

    And reading some of the snippets in that article, it seems most of those interviewed are in businesses that decided to use this tool or that tool without any actual insight into what the doers are doing; so, like we've been saying before, adding a tool without thinking about the processes. I bet most of these implementations are Big 4 consultancies too.
     
    Upvote 0

    Data Swami

    Business Member
  • Business Listing
    Indeed, but I don't think it's just AI; it's symptomatic of so many tools. For example, SatNav has significantly affected the human sense of direction and the ability to navigate maps and find your way around. People now use SatNav for the simplest of routes, avoiding even a moment's thought.
    Then we have Google and social media, where we used to use libraries and had to research through books, crippling our ability to research objectively.

    I recall an earlier job that used to involve a lot of travel; whenever I visited a new town, the first thing I did was pop into the first garage I saw and purchase a local map so I could find the place I needed to visit (there was no Internet or SatNav back then). The ability to find your way around a place without a computer directing you is lost these days.

    AI is just another evolution of human reliance on technology; it's 'progress' @ctrlbrk ;)
    Don't forget, video games are making us more violent too XD
     
    • Like
    Reactions: Ozzy
    Upvote 0
