I have a boss who tells us weekly that everything we do should start with AI. Researching? Ask ChatGPT first. Writing an email or a document? Get ChatGPT to do it.

They send me documents they “put together” that are clearly ChatGPT-generated, with no shame. They tell us that if we aren’t doing these things, our careers will be dead. And their boss is just as bought into AI, and so on.

I feel like I am living in a nightmare.

  • Ephera@lemmy.ml · 2 days ago

    I find it annoying, because the hype means that if you’re not building a solution that involves AI in some way, you practically can’t get funding. Many vital projects are being cancelled due to a lack of funding and tons of bullshit projects get spun up, where they just slap AI onto a problem for which the current generation of AI is entirely ill-suited.

    Basically, if you don’t care for building useful stuff, if you’re an opportunistic scammer, then the hype is fucking excellent. If you do care, then prepare for pain.

  • BanaramaClamcrotch@lemmy.zip · 2 days ago

    Idk, my boss sometimes uses ChatGPT to generate goofy (and appropriate) memes about the workplace… That seems to be about it.

  • Mrkawfee@lemmy.world · 3 days ago

    We have Copilot at my corporate job and I use it every day: summarising email chains, reviewing documents, and research. It’s a huge time saver.

    It’s good, not perfect. It makes mistakes, and because of hallucination risks I have to double-check sources. I don’t see it taking my job, as it’s more like having an assistant whose output you have to sense-check. It’s made me much more productive.

  • owsei@programming.dev · 3 days ago

    A higher-up really likes AI for simple proofs of concept, but at times they get so big they’re unusable. With the people on my team, however, bad code is synonymous with AI usage or stupidity (same thing).

  • Crotaro@beehaw.org · 3 days ago

    Disclaimer: I only started working at this company about three weeks ago, so this info may not be as accurate as I currently think it is.

    I work in quality management and recently asked my boss what the current stance on AI is, since he mentioned quite early that he and his colleagues sometimes use ChatGPT and Copilot in conjunction to write up some text for process descriptions or info pages. They use it in research tasks, or, for example, to summarize large documents like government regulations, and they very often use it to rephrase texts when they can’t think of a good way to word something. From his explanation, the company consensus seems to be that everyone has access to Copilot via our computers and if someone has, for example, a Kagi or Gemini or whatever subscription, we are absolutely allowed and encouraged to utilize it to its full potential.

    The only rules seem to be to never blindly trust the AI output and to never feed it sensitive information about the company (and/or our suppliers/customers).

  • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 3 days ago

    We had a discussion about AI at work. Our consensus was that it doesn’t matter how you want to do your work. What matters is the result, not the process. Are you writing clean code and finishing tasks on time? That’s the metric. How you get there is up to you.

    • mavu@discuss.tchncs.de · 3 days ago

      While leaving individual decisions to people sounds like a good idea, long-term it is quite dumb.

      • If you let an LLM solve your software dev problems, you learn nothing. You don’t get better at handling this kind of problem, you don’t get faster, and you don’t get experience in spotting the same problem and having a solution ready.

      • You don’t train junior devs this way, and in 20 years there will be (or would be, without the bubble popping) a massive need for skilled software developers (and other specialists in other fields; better pray that medical doctors handle their profession differently…).

      • Do you really enjoy tweaking a prompt, dealing with “lying” LLMs, and the occasional deleted hard drive? Is this really what you want to do as a job?

      • (Bonus point) Would your company be OK with someone paying a remote worker to do their tasks for a fraction of the salary, and then doing nothing? I doubt that. So apparently it does matter how the work gets done.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 3 days ago

        Old enough to remember how people made these same arguments about writing in anything but assembly, using garbage collection, and so on. Technology moves on, and every time there’s a new way to do things people who invested time into doing things the old way end up being upset. You’re just doing moral panic here.

        It’s also very clear that you haven’t used these tools yourself, and you’re just making up a straw man workflow that is divorced from reality.

        Meanwhile, your bonus point has nothing to do with technology itself. You’re complaining about how capitalism works.

        • mavu@discuss.tchncs.de · 2 days ago

          > Old enough to remember how people made these same arguments about writing in anything but assembly, using garbage collection, and so on. Technology moves on, and every time there’s a new way to do things people who invested time into doing things the old way end up being upset. You’re just doing moral panic here.

          If this is an example of your level of reading comprehension, then I guess it’s no surprise that you find LLMs work well for you. Your answer addresses none of the points I made, and just tries to do the Jedi-mind-trick handwave, which unfortunately doesn’t work in real life.

            • mavu@discuss.tchncs.de · 2 days ago

              > that don’t exist in the real world.

              A bit like your ability to reason and provide arguments. But I guess that happens when you’ve used LLMs for too long.

                • mavu@discuss.tchncs.de · 2 days ago

                  I’m sorry?

                  You have the gall to say that to me, when the first thing you did was falsely accuse me of using straw man arguments and making things up.

                  And then you come here, after providing zero actual counterpoints, and tell me I am acting like a child?

                  Incredible.

        • zbyte64@awful.systems · 3 days ago

          All the technologies you listed behave deterministically, or at least predictably enough that we generally don’t have to worry about surprises from that abstraction layer. Technology does not just move on; practitioners need to actually find it practical beyond their next project that satisfies the shareholders.

          • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 3 days ago

            Again, you’re discussing tools you haven’t actually used, and you clearly have no clue how they work. If you had, you would realize that agents can work against tests, which act as a contract they fulfill. I use these tools on a daily basis, and I have no idea what these surprises you’re talking about are. As a practitioner, I find these things plenty practical.

            • zbyte64@awful.systems · 3 days ago

              I’ve literally integrated LLMs into a materials optimization routine at Apple. It’s dangerous to assume what strangers do and do not know.

              • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 2 days ago

                I’m not assuming anything. Either you have not used these tools seriously, or you’re intentionally lying here. Your description of how these tools work and their capabilities is at odds with reality. It’s dangerous to make shit up when talking to people who are well versed in a subject.

                • zbyte64@awful.systems · 2 days ago

                  Your description of the tools only served to make an inaccurate comparison. But sure, I am the “dangerous” one for showing how those examples are deterministic while gAI is not. Your responses full of personal attacks make it harder to address your claims and make me think you are here to convince yourself and not others.

  • djsoren19@lemmy.blahaj.zone · 3 days ago

    Everyone in my office hates it, including my director, who occasionally goes on a rant because Microsoft and Adobe keep pushing their AI on us when we don’t want it.

    Sometimes I’m very thankful I work for a nonprofit. They’re still pretty shitty to us employees, but our focus is first and foremost on doing the job right, something AI has no chance at.

  • Brutticus@midwest.social · 3 days ago

    I work in social work; I would say about 60 percent of what I do is paperwork. My agency has told us not to use LLMs, as that would be a massive HIPAA nightmare. That being said, we use “secure” corporate emails. These use the Microsoft 365 Office suite, which is Copilot-enabled. That means TL;DRs at the top before you even look at the email, predictive text… and not much else.

    Would I love a bot that could spit out a plan based on my notes or specifications? Absolutely. Do I trust them not to make shit up? Absolutely not.

    • Apytele@sh.itjust.works · 3 days ago

      Apparently a hospital in my network is trialing a tool to generate assessment flowsheets based on an audio recording of a nurse talking aloud while doing a head-to-toe assessment. So if they say “you’ve got a little swelling in your legs”, it’ll mark down bilateral edema under the peripheral vascular section. You have to review it before submitting, but it seems nice.

  • Foofighter@discuss.tchncs.de · 3 days ago

    They just hopped onto the bandwagon pushing for Copilot and SharePoint. Just in time, as some states are switching to open source.

  • I’m a consultant, so I’m doing a lot of different things day to day. We use the Copilot facilitator to track meetings and generate meeting recaps and next steps. It is pretty helpful in that regard and often matches the tasks I write for myself during the meeting.

    I also have to support a wide range of different systems, and I can’t be an expert in all of them, so it is helpful for generating short scripts and workflows, whether it’s PowerShell one day, bash the next, Exchange management, etc. I do know PowerShell and bash scripting decently well, and the scripts often need to be fixed, but it is good at generating templates and starter scripts that I flesh out as the need arises. At this point I’ve collected many of the useful ones I need in my repos and reuse them pretty often.

    Lastly, one of the companies I consult for uses machine learning to design medical implants and to design and test novel materials. That is pretty cool, and I don’t think they could do some of the stuff they’re doing without machine learning. While still AI, it isn’t really GPT-style generative AI though; not sure if that is what you’re asking.

  • Bunbury@feddit.nl · 3 days ago

    I’m in an environment with various levels of sensitive data, including very sensitive data. Think GDPR-type stuff you really don’t want to accidentally leak.

    One day when we started up our company laptops, Copilot was just there, installed and auto-launching on startup. Nobody warned us. No indication of how to use it or not use it. That lasted about 3 months. Eventually they limited some ways of using it and gave a little bit of guidance on not putting the most sensitive data in there. Then they enabled Copilot in most of the apps that we use to actually process the sensitive data. It’s in everything. We are actively encouraged to learn more about it and use it.

    I overheard a colleague recently trying to get it to create a whole PowerPoint presentation. From what I heard, the results were less than ideal.

    The scary thing is that I’m in a notoriously risk averse industry. Yet they do this. It’s… a choice.

  • Appoxo@lemmy.dbzer0.com · 3 days ago

    The order is:
    Use whatever tool isn’t malicious and doesn’t attack customer data.

    Most people use (IMO) way too much AI. The first result (the Google AI answer) is trusted and that’s it.
    No research done beyond that.

    I purposefully blocked the AI answer in uBlock. I don’t want any of that.
    Besides that, I use it on occasion to look for a word or to reword my search query if I don’t find or know what I am looking for.
    Very useful for the “What was the name of X again? It does Y and Z” queries.
    Also for PowerShell scripting, because it can give me usage examples.

    But every answer is double- and triple-checked for accuracy.
    Seen too much news about made-up answers.

    At home I usually only use it for bash scripting because I can’t be bothered to learn that.

  • morgan423@lemmy.world · 3 days ago

    I use Excel at work, not in a traditional accounting sense, but my company uses it as an interface with one of our systems I frequently work with.

    Rather than tediously search the main Excel sheets that get fed into that system for all of the data fields I have to fill in, I made separate Excel tools that consolidate all of that data, then use macros to put the data into the correct fields on the main sheets for me.

    Occasionally I’ll have to add new functionality to that sheet, so I’ll ask AI to write the macro code that does what I need it to do.

    Saves me from having to learn obscure VBA programming to perform a function that I do during 0.0001% of my work time, but that’s about the extent of it. For now.

    Of course most of what I do is white collar computer work, so I’m expecting that my current job likely has a two-year-or-less countdown on it before they decide to use AI to replace me.

  • lichtmetzger@discuss.tchncs.de · 3 days ago

    I work for a small advertising agency as a web developer. I’d say it’s mixed. The writing team is pissed about AI because of the SEO-optimized slop garbage that is ruining enjoyable articles on the internet. The video team enjoys it, because it’s really easy to generate good (enough) looking VFX with it. I use it rarely, mostly for mundane tasks and boilerplate code. I enjoy using my actual brain to solve coding problems.

    Customers don’t have a fucking clue, of course. If we told them that they need AI for some stupid reason, they would probably believe us.

    The boss is letting us decide and not forcing anything upon us. If we believe our work is done better with it, we can go for it, but we don’t have to. Good boss.

      • lichtmetzger@discuss.tchncs.de · 3 days ago

        VFX, not SFX. In our company, the team shoots real-life videos and then puts effects on top. The most recent project I saw was a movie for a manufacturer of paper colors. The artists made a big tower in one of their factories explode into a wave of paint; it looked pretty (but it was only a few seconds long).