78 results found | searching for "chatgpt"

  • alexSmith731993
  • chandrasekhar121
  • webkulsoftware
  • The Magento 2 ChatGPT Extension helps store owners create smart content quickly. With AI, Magento 2 users can generate product descriptions, blog posts, and FAQs, and admins can automate content writing and save time. The extension’s AI tools also improve SEO by creating relevant text. Visit here: https://store.webkul.com/magento2-chatgpt-ai-extension.html
  • thebrewnews
  • India Bans ChatGPT and DeepSeek: What It Means for AI Regulation: https://thebrewnews.com/thebrew-news/south-asia/india-bans-chatgpt-deepseek/
  • chandrasekhar121
  • Magento 2 ChatGPT makes your #Magento2 #store smart and #interactive. This AI-powered chatbot provides a better #customer #experience by instantly answering your customers’ questions. https://store.webkul.com/magento2-chatgpt-ai-extension.html
  • triptivermaa01
  • Perplexity is reportedly looking to fundraise at an $8B valuation. AI search engine Perplexity is in fundraising talks and hopes to raise around $500 million at an $8 billion valuation, according to the Wall Street Journal. If a deal happens on those terms, it would more than double the $3 billion valuation Perplexity reached when it raised from SoftBank over the summer. The WSJ reports that the company currently receives about 15 million queries a day and brings in around $50 million in annualized revenue. Perplexity uses AI to help people search the web in a chatbot-style interface. Some news publishers have accused the company of unauthorized web scraping and plagiarism, and The New York Times has even sent Perplexity a cease-and-desist letter, but CEO Aravind Srinivas said he wants to work with publishers and has “no interest in being anyone’s antagonist here.” These fundraising talks come after OpenAI announced raising a $6.6 billion round at a $157 billion valuation. While products like OpenAI’s ChatGPT have blurred the line between chatbot and search engine, the company is moving more directly into search with SearchGPT. A Perplexity spokesperson declined to comment on the WSJ report. https://techcrunch.com/2024/10/20/perplexity-is-reportedly-looking-to-fundraise-at-an-8b-valuation/
  • triptivermaa01
  • ChatGPT comes to Windows. Today, OpenAI announced that it’s begun previewing a dedicated Windows app for ChatGPT, its AI-powered chatbot platform. Currently only available to ChatGPT Plus, Team, Enterprise, and Edu users, the app is an early version, OpenAI says, arriving ahead of a “full experience” later in the year. “With the official ChatGPT desktop app, you can chat about files and photos,” OpenAI writes. “This app brings you the newest model improvements from OpenAI, including access to OpenAI o1-preview, our newest and smartest model.” The ChatGPT app for Windows can run on most Windows 10 machines, but currently has certain limitations compared to other ChatGPT clients. It doesn’t support voice yet, including Advanced Voice Mode, and some integrations with OpenAI’s GPT Store aren’t functional. As with the ChatGPT app for macOS, the ChatGPT app for Windows lets you minimize it to a small “companion” window alongside other apps while you work. You can upload files and photos to it, have it summarize documents, and create images via OpenAI’s DALL-E 3 image generator. https://techcrunch.com/2024/10/17/chatgpt-comes-to-windows/
  • triptivermaa01
  • Sam Altman’s Worldcoin becomes World and shows new iris-scanning Orb to prove your humanity. Worldcoin, the Sam Altman co-founded “proof of personhood” project that scans people’s eyeballs, announced on Thursday that it dropped the “coin” from its name and is now just “World.” The startup behind the World project, Tools for Humanity, also unveiled its next generation of iris-scanning “Orbs” and other tools at a live event in San Francisco. Co-founder and CEO of Tools for Humanity, Alex Blania, said the project’s old name “just doesn’t work anymore,” potentially signaling the startup is looking to expand its identity beyond its original currency mission. (Eye-scanning initially was seen as a way to get access to Worldcoins, though the founders say this never happened.) OpenAI’s CEO, Sam Altman, spends a good chunk of his time working on World, Blania told TechCrunch during a press conference, but said the two startups’ missions are independent from each other. However, Blania didn’t rule out that World’s cryptocurrency could be incorporated into ChatGPT one day. “Well, he’s a co-founder and he’s been so from the beginning. So, we talk a couple times a week. He’s involved in all the decisions,” Blania told TechCrunch. “Of course, he’s focused on OpenAI,” Blania continued. “How tied is World’s success to OpenAI? I think actually not at all. I think these are two very separate missions, and I think AI is heading where it’s heading, and we think what we built here is very important infrastructure for the world, and that will not change.” https://techcrunch.com/2024/10/17/sam-altmans-worldcoin-becomes-world-and-shows-new-iris-scanning-orb-to-prove-your-humanity/
  • triptivermaa01
  • Boston Dynamics teams with TRI to bring AI smarts to Atlas humanoid robot. Boston Dynamics and Toyota Research Institute (TRI) on Wednesday revealed plans to bring AI-based robotic intelligence to the electric Atlas humanoid robot. The collaboration will leverage the work that TRI has done around large behavior models (LBMs), which operate along similar lines as the more familiar large language models (LLMs) behind platforms like ChatGPT. Last September, TechCrunch paid a visit to TRI’s Bay Area campus for a closer look at the institute’s work on robot learning. In research revealed at last year’s Disrupt conference, institute head Gill Pratt explained how the lab has been able to get robots to 90% accuracy when performing household tasks like flipping pancakes through overnight training. “In machine learning, up until quite recently there was a tradeoff, where it works, but you need millions of training cases,” Pratt explained at the time. “When you’re doing physical things, you don’t have time for that many, and the machine will break down before you get to 10,000. Now it seems that we need dozens. The reason for the dozens is that we need to have some diversity in the training cases. But in some cases, it’s less.” Boston Dynamics is a good match for TRI on the hardware side. The Spot-maker has done its share on the software and AI front to power its own systems, but the manner of work required to teach robots to perform complex tasks with full autonomy is another beast altogether. https://techcrunch.com/2024/10/16/boston-dynamics-teams-with-tri-to-bring-ai-smarts-to-atlas-humanoid-robot/
  • triptivermaa01
  • Meta’s AI chief says world models are key to ‘human-level AI’ — but it might be 10 years out. Are today’s AI models truly remembering, thinking, planning, and reasoning, just like a human brain would? Some AI labs would have you believe they are, but according to Meta’s chief AI scientist Yann LeCun, the answer is no. He thinks we could get there in a decade or so, however, by pursuing a new method called a “world model.” Earlier this year, OpenAI released a new feature it calls “memory” that allows ChatGPT to “remember” your conversations. The startup’s latest generation of models, o1, displays the word “thinking” while generating an output, and OpenAI says the same models are capable of “complex reasoning.” That all sounds like we’re pretty close to AGI. However, during a recent talk at the Hudson Forum, LeCun undercut AI optimists, such as xAI founder Elon Musk and Google DeepMind co-founder Shane Legg, who suggest human-level AI is just around the corner. “We need machines that understand the world; [machines] that can remember things, that have intuition, have common sense, things that can reason and plan to the same level as humans,” said LeCun during the talk. “Despite what you might have heard from some of the most enthusiastic people, current AI systems are not capable of any of this.” LeCun says today’s large language models, like those which power ChatGPT and Meta AI, are far from “human-level AI.” Humanity could be “years to decades” away from achieving such a thing, he later said. (That doesn’t stop his boss, Mark Zuckerberg, from asking him when AGI will happen, though.) The reason why is straightforward: those LLMs work by predicting the next token (usually a few letters or a short word), and today’s image/video models are predicting the next pixel. In other words, language models are one-dimensional predictors, and AI image/video models are two-dimensional predictors. These models have become quite good at predicting in their respective dimensions, but they don’t really understand the three-dimensional world. Because of this, modern AI systems cannot do simple tasks that most humans can. LeCun notes how humans learn to clear a dinner table by the age of 10, and drive a car by 17 – and learn both in a matter of hours. But even the world’s most advanced AI systems today, built on thousands or millions of hours of data, can’t reliably operate in the physical world. https://techcrunch.com/2024/10/16/metas-ai-chief-says-world-models-are-key-to-human-level-ai-but-it-might-be-10-years-out/
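A minimal, hypothetical Python sketch of the “one-dimensional predictor” idea described in the post above: a toy character-level bigram model that, like an LLM only in spirit, learns nothing except which symbol tends to follow another in a sequence. The training string, the count-based bigram table, and the greedy decoding below are illustrative assumptions rather than how production LLMs work; real models predict tokens with learned neural networks, not count tables.

# Toy illustration (assumption, not from the article above): a bigram
# "next-character predictor" trained on one short string. It predicts along a
# single dimension (the sequence) and has no model of the physical world the
# text describes.
from collections import Counter, defaultdict

text = "the robot clears the table then the robot drives the car "

# Count, for each character, how often each possible next character follows it.
transitions = defaultdict(Counter)
for current, nxt in zip(text, text[1:]):
    transitions[current][nxt] += 1

def predict_next(char: str) -> str:
    """Greedily return the most frequent successor observed in training."""
    followers = transitions.get(char)
    return followers.most_common(1)[0][0] if followers else " "

# Generate by feeding each prediction back in as the next input.
out = "t"
for _ in range(60):
    out += predict_next(out[-1])
print(out)

Running this just loops on “the the the …”, which is the point LeCun is making: a sequence predictor can get very good at guessing the next symbol without understanding tables, cars, or anything else about the three-dimensional world.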