Alibaba Tongyi Qianwen Releases Qwen2.5-Turbo AI Model: 1 Million-Token Context, Processing Time Cut to 68 Seconds
November 19 news: Alibaba Tongyi Qianwen announced in a blog post yesterday (November 18) that, after months of optimization and polishing and in response to community requests for longer context lengths (Context Length), it has launched the Qwen2.5-Turbo AI model. Qwen2.5-Turbo extends the context length from 128K to 1 million tokens, roughly equivalent to 1 million English words or 1.5 million Chinese characters, enough to hold 10 full-length novels, ...
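Since Qwen2.5-Turbo is served through Alibaba Cloud's API rather than as downloadable weights, a long-context call looks like an ordinary chat completion with a very large prompt. The sketch below is a hedged illustration only: the OpenAI-compatible base URL, the `qwen-turbo` model identifier, and the environment variable name are assumptions to verify against Alibaba Cloud Model Studio's documentation.

```python
# Hypothetical sketch: querying Qwen2.5-Turbo over an OpenAI-compatible API.
# Base URL and model name are assumptions; check Alibaba Cloud Model Studio docs.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],                        # assumed env var
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",   # assumed endpoint
)

with open("novel.txt", "r", encoding="utf-8") as f:
    long_document = f.read()  # up to ~1M tokens of context, per the announcement

resp = client.chat.completions.create(
    model="qwen-turbo",  # assumed identifier for the 1M-token Qwen2.5-Turbo
    messages=[
        {"role": "system", "content": "You answer questions about the provided document."},
        {"role": "user", "content": long_document + "\n\nSummarize the main plot in 5 bullet points."},
    ],
)
print(resp.choices[0].message.content)
```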
-
A Look at AI Virtual Digital Humans: A Roundup of Current Open-Source Digital Human Projects
Lately in the AI world, digital humans have been getting more and more impressive, with every team releasing "the strongest open-source" digital human. But with so many choices, how do you know which one fits you? You can't just go "me + difficulty = give up," right? No way! As someone who spoils their readers, I can't leave you in that bind, so I'm stepping in. Here is a one-stop roundup of all the digital-human bundles I've shared before, covering the results they produce, the hardware they require, generation times, and more, so you can see at a glance which open-source digital human is strongest and pick the best one together. Digital humans are red hot! Speaking of A...
-
Alibaba Tongyi Qianwen Open-Sources the Full Qwen2.5-Coder Model Family, Claiming Coding Ability on Par with GPT-4o
November 12 news: Alibaba Tongyi Qianwen has open-sourced the full Qwen2.5-Coder model family, with Qwen2.5-Coder-32B-Instruct becoming the current SOTA open-source code model; officially, its coding ability matches GPT-4o. As the flagship model of this release, Qwen2.5-Coder-32B-Instruct performs well on several popular code-generation benchmarks (such as EvalPlus, LiveCodeBench, and BigCodeBench)...
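Since the weights are published on Hugging Face, a quick local test with the `transformers` chat template is the most direct way to try the smaller Qwen2.5-Coder checkpoints. The sketch below is a minimal, hedged example; the `Qwen/Qwen2.5-Coder-7B-Instruct` repo id is used here because the 32B flagship needs far more GPU memory.

```python
# Minimal sketch: local inference with a Qwen2.5-Coder instruct checkpoint via transformers.
# Repo id and chat-template usage follow the usual Qwen conventions; verify on Hugging Face.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"  # assumed repo id; 32B also exists but needs more VRAM
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```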
-
Say Goodbye to Silent Videos: Zhipu Launches the New Qingying, Generating 10-Second 4K 60fps Videos with Their Own Sound Effects
The Zhipu technical team today released and open-sourced the latest version of its video model, CogVideoX v1.5. Compared with the previous model, CogVideoX v1.5 adds generation of 5/10-second, 768P, 16fps videos, its I2V (image-to-video) model supports arbitrary aspect ratios, and image-to-video quality and complex semantic understanding are greatly improved. Officially, CogVideoX v1.5 will also go live on the "Qingying" platform and, combined with the newly launched CogSound sound-effect model, the "new Qingying" will have the following features: Quality improvement: in...
-
Meta Open-Sources the MobileLLM Family of Small Language Models: Smartphone-Friendly, 125M-1B Versions Available
Meta issued a press release last week announcing that it has officially open-sourced MobileLLM, a family of small language models that can run on smartphones, and has added three new parameter sizes to the series: 600M, 1B, and 1.5B, with the project's GitHub page linked in the original post. Meta researchers say the MobileLLM family is built specifically for smartphones: the models use a streamlined architecture and introduce the SwiGLU activation function and grouped-query att...
Tencent Launches the Hunyuan-Large Model: 389B Total Parameters, the Industry's Largest Open-Sourced Transformer-Based MoE Model
Tencent announced the launch of the Hunyuan-Large model, the largest Transformer-based MoE model open-sourced in the industry so far, with 389 billion total parameters (389B) and 52 billion activated parameters (52B). Tencent has open-sourced Hunyuan-A52B-Pretrain, Hunyuan-A52B-Instruct, and Hunyuan-A52B-Instruct-FP8 on Hugging Face, and released...
-
ElevenLabs Releases Open-Source Mini-Project X-to-Voice: Turn an X (Twitter) Account into a Personalized Voice and Avatar with One Click
Artificial-intelligence company ElevenLabs recently released an open-source project called "X-to-Voice," a tool that analyzes an X (Twitter) user's profile and automatically generates a digital voice and animated avatar matching the user's personality. The project integrates a number of cutting-edge services: ElevenLabs' own voice-design API is responsible for voice generation, Apify handles profile and image data collection, and Hedra takes charge of dynamic avatar...
-
World's First Open-Source AI Standard Released, Developed by Microsoft, Google, Amazon, Meta, Intel, Samsung, and Other Giants
At the ALL THINGS OPEN 2024 conference held late this month, the Open Source Initiative (OSI) officially released version 1.0 of the Open Source AI Definition (OSAID), marking the birth of the world's first open-source AI standard. Founded in 1998, OSI is a global non-profit organization that aims to define and "manage" all things open source. The OSAID standard was co-designed by more than 25 organizations, including Microsoft, Google, Amazon, Meta, Intel, ...
-
OpenAI Open-Sources New SimpleQA Benchmark to Curb Large Models' "Nonsense"
On October 31, OpenAI announced that it is open-sourcing a new benchmark called SimpleQA, which measures the ability of language models to answer short, fact-seeking questions, as a way to gauge their factual accuracy. One of the open challenges in AI is how to train models to produce factually correct answers. Current language models sometimes produce incorrect output or unsubstantiated answers, a problem known as "hallucination." Language models that generate more accurate answers with fewer hallucinations are more reliable and can be used...
-
Google DeepMind Open-Sources SynthID Text Tool for Recognizing AI-Generated Text
Google DeepMind announced on October 23 that it has officially open-sourced its SynthID Text watermarking tool for free use by developers and businesses. Google launched the SynthID tool in August 2023; it can create watermarks for AI content (declaring that a work was created by AI) and recognize AI-generated content. It embeds digital watermarks directly into AI-generated images, audio, text, and video without compromising the original content, and can also scan that content for existing digital water...
-
Zhiyuan Robotics, "Zhihui Jun's" Startup, Announces Global Open-Sourcing of the Lingxi X1
October 24 news: Zhiyuan Robotics announced today that the "Lingxi X1" is officially open-sourced worldwide: a full set of hardware and software drawings and code is online on GitHub, and the development guide is online on Zhiyuan Robotics' official website. According to Zhiyuan Robotics, as the industry's first company to open-source full-stack humanoid-robot drawings and code, this release provides "one-stop" hardware and software technical resources without reservation, totaling more than 1.2 GB of material. On the mechanical-hardware side, the open-sourced content includes detailed drawings of the whole machine's structure, hardware block diagrams, a bill of materials (BOM), and assembly instructions, as well as the machine's...
-
A Heavyweight Newcomer in Open-Source Text-to-Image AI: The Full Stable Diffusion 3.5 Lineup Arrives, Running "Out of the Box" on Consumer-Grade Hardware
In a blog post yesterday (October 22), Stability AI announced the release of Stable Diffusion 3.5, which it describes as a significant advance in open-source AI image models. Stable Diffusion 3.5 is available in Medium (released on October 29), Large, and Large Turbo sizes, designed to meet the different needs of researchers, enthusiasts, startups, and enterprises, with the following introduction: Stable Dif...
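For readers who want to try the release locally, the Large checkpoint is published on Hugging Face and loads through the diffusers library. The sketch below is a minimal, hedged example: it assumes the `stabilityai/stable-diffusion-3.5-large` repo id, that `StableDiffusion3Pipeline` accepts the 3.5 weights, and a GPU with enough VRAM (quantized variants exist for smaller cards).

```python
# Minimal sketch: text-to-image with Stable Diffusion 3.5 Large via diffusers.
# Repo id and pipeline class are assumptions to verify against the model card.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",  # gated repo; accept the license on Hugging Face first
    torch_dtype=torch.bfloat16,
)
pipe = pipe.to("cuda")

image = pipe(
    prompt="a watercolor painting of a lighthouse at dawn",
    num_inference_steps=28,   # typical defaults; tune for speed vs. quality
    guidance_scale=4.5,
).images[0]
image.save("sd35_lighthouse.png")
```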
-
Zhipu Open-Sources CogView3-Plus; Related Features Now Live on the Zhipu Qingyan App
October 14 news: Zhipu's technical team announced today that it has open-sourced the text-to-image models CogView3 and CogView3-Plus-3B, and the capabilities of this model series are now available in the Zhipu Qingyan app. According to the introduction, CogView3 is a text-to-image model based on cascaded diffusion, consisting of the following three stages. Stage 1: generate a 512x512 low-resolution image using a standard diffusion process. Stage 2: use a relay diffusion process to perform 2x super-resolution generation, going from 512x512...
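To experiment with the open-sourced weights rather than the app, the checkpoint can in principle be loaded through diffusers' generic pipeline loader. This is a hedged sketch: the `THUDM/CogView3-Plus-3B` repo id and diffusers compatibility for this model are assumptions to verify against the project's README.

```python
# Hedged sketch: loading CogView3-Plus-3B through diffusers' generic pipeline loader.
# Repo id and diffusers compatibility are assumptions; check the official repo for exact usage.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "THUDM/CogView3-Plus-3B",   # assumed Hugging Face repo id
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    prompt="an ink-wash painting of mountains shrouded in mist",
    num_inference_steps=50,
).images[0]
image.save("cogview3_mountains.png")
```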
-
PearAI, Which Calls Itself an Open-Source Version of Cursor and Just Raised $500,000 in Funding, Has Been Accused of Plagiarism
PearAI, an AI programming tool that describes itself as an "open-source version of Cursor," recently announced that it has received $500,000 (about 3.5 million yuan) in funding from Y Combinator. PearAI founder Duke Pan has admitted that the product is essentially a clone of another AI editor, Continue. While Continue itself is a project released under the Apache open-source license, PearAI tried to build on it with a homegrown closed-source license called "Pear Ent...
-
China Telecom AI Research Institute Completes the First Fully Domestic "10,000-GPU, Trillion-Parameter" Large-Model Training Run; TeleChat2-115B Is Open-Sourced to the Public
On September 28, the official WeChat account of the China Telecom Artificial Intelligence Research Institute (TeleAI) announced that TeleAI has completed China's first trillion-parameter large-model training run on a fully domestically produced 10,000-GPU ("Wanka") cluster, and has formally open-sourced TeleChat2-115B, its Xingchen semantic large model and the first hundred-billion-parameter model trained on a fully domestic Wanka cluster with a homegrown deep-learning framework. Officially, this achievement signifies that domestic large-model training has truly achieved full localization, formally entering a new phase of home-grown, independent, secure, and controllable innovation...
-
How to Play with Stable Diffusion, FLUX, and Other Open-Source AI Image Models? Use a Cloud-Based Open-Source Model Painting Platform
AI image generation is by now quite mature. The closed-source Midjourney is easy to use and can produce photography-grade results with impressive quality, but it costs a few dozen dollars a month, which adds up, and its extensibility is poor. If you need character and scene consistency, or want to build dedicated painting tools around a workflow, open-source models are the way to go: the Stable Diffusion series and the recently popular FLUX have been dominating media platforms. The appeal of open source lies in its extensibility and controllability, plus workflows can be packaged into products, and the output quality keeps closing in on Mid...
-
Get Started Easily with the Dify Open-Source LLM Development Platform: Combining Agents and RAG to Build Your Own AI Workbench
Dify is an open-source platform for building AI applications. It combines the Backend-as-a-Service and LLMOps concepts and supports a variety of large language models, such as Claude 3 and OpenAI's models, working with multiple model vendors so that developers can choose the model that best fits their needs. By providing powerful dataset management, visual prompt orchestration, and application operations tools, Dify greatly reduces the complexity of AI application development. I. What is Dify...
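Once an application has been created in Dify, it is typically exposed through an HTTP API keyed by an app token, which is what makes the "Backend-as-a-Service" idea concrete. The sketch below is a hedged illustration of calling such a chat endpoint from Python; the `/v1/chat-messages` path, the field names, and the self-hosted base URL are assumptions to check against your Dify instance's API reference.

```python
# Hedged sketch: calling a Dify chat application's HTTP API from Python.
# Endpoint path, payload fields, and base URL are assumptions; consult your instance's API docs.
import os
import requests

BASE_URL = "http://localhost/v1"                 # assumed self-hosted Dify API base
API_KEY = os.environ["DIFY_APP_API_KEY"]         # app-level API key from the Dify console

payload = {
    "inputs": {},                                # app-defined input variables, if any
    "query": "Summarize our refund policy for a customer.",
    "response_mode": "blocking",                 # ask for a single complete answer
    "user": "demo-user-001",                     # arbitrary end-user identifier
}
resp = requests.post(
    f"{BASE_URL}/chat-messages",
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json().get("answer"))
```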
-
Llama 3.2, Billed as the Strongest Open-Source AI Model for Edge Devices, Is Released: It Runs on Phones, Spans 1B Text-Only to 90B Multimodal, and Challenges OpenAI's GPT-4o mini
In a September 25 blog post, Meta officially launched the Llama 3.2 family of AI models, which are open and customizable so that developers can tailor them to their needs for edge AI and vision applications. Offering multimodal vision models and lightweight text models, Llama 3.2 represents Meta's latest advance in large language models (LLMs), providing more capability and broader applicability across a variety of use cases. The lineup includes small and medium-sized vision LLMs (11B and 90B) suitable for edge and mobile devices, as well as...
Alibaba Tongyi Qianwen Open-Sources the Qwen2.5 Large Models, Claiming Performance Surpassing Llama
At the 2024 Apsara Conference (Yunqi Conference), Alibaba Cloud CTO Zhou Jingren unveiled Qwen2.5, a new generation of open-source models, whose flagship Qwen2.5-72B is claimed to outperform Llama's 405B model. Qwen2.5 covers large language models, multimodal models, math models, and code models in multiple sizes, and each size comes in base, instruction-tuned, and quantized versions, for a total of more than 100 models on the shelf. Qwen2.5 language models come in 0.5B, 1.5B, 3B, 7B, 14B, 32B, and ...
-
Mianbi Intelligence Releases the MiniCPM 3.0 On-Device Model: Runs in 2GB of Memory with Performance Exceeding GPT-3.5
The official WeChat account of Mianbi Intelligence published a blog post yesterday (September 5) announcing the launch of the open-source MiniCPM3-4B AI model, claiming that "the on-device ChatGPT moment has arrived." It is an AI model that can run on devices with only 2GB of memory, heralding a new era of on-device AI experiences. The MiniCPM 3.0 model has 4B parameters, outperforms GPT-3.5, and can deliver GPT-3.5-level AI services on mobile devices, letting users enjoy fast, secure, and functional...
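The weights are published on Hugging Face, so a quick desktop test can go through `transformers` before worrying about on-device deployment. This is a hedged sketch: the `openbmb/MiniCPM3-4B` repo id and its use of `trust_remote_code` are assumptions to confirm on the model card, and the 2GB figure refers to quantized on-device builds rather than this full-precision load.

```python
# Hedged sketch: trying MiniCPM3-4B locally with transformers before any on-device deployment.
# Repo id and trust_remote_code requirement are assumptions; check the Hugging Face model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openbmb/MiniCPM3-4B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,       # model ships custom code in the repo
)

messages = [{"role": "user", "content": "Give me three tips for writing readable Python."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```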
-
Zero One Everything Open-Sources the Yi-Coder Series of Programming-Assistant Models, Supporting 52 Programming Languages
Zero One Everything (01.AI) announced today that it has open-sourced the Yi-Coder series, the programming-assistant models in the Yi model family. The Yi-Coder models are designed for coding tasks and come in two parameter sizes, 1.5B and 9B. Among them, Yi-Coder-9B is said to perform "better than other models with fewer than 10B parameters," such as CodeQwen1.5 7B and CodeGeeX4 9B, and to be "comparable to DeepSeek-Coder 33B." According to the introduction, ...
-
The "Strongest" Small Open-Source AI Model Zamba2-mini Is Released: 1.2 Billion Parameters, Under 700MB of Memory at 4-Bit Quantization
Zyphra published a blog post on August 27 announcing the release of the Zamba2-mini 1.2B model with 1.2 billion parameters, claiming it is a SOTA on-device small language model with a memory footprint of less than 700MB at 4-bit quantization. SOTA stands for state-of-the-art; it does not refer to a specific model but to the best or most advanced model available for a given research task. Zamba2-mini 1.2B is small, yet it is comparable to Google Gemm...
-
Zhipu AI Open-Sources the CogVideoX-5B Video-Generation Model, Which Can Run on an RTX 3060 Graphics Card
On August 28, Zhipu AI open-sourced the CogVideoX-5B video-generation model. Compared with the previously open-sourced CogVideoX-2B, the company says its video-generation quality and visual effects are better. Inference performance has also been heavily optimized, greatly lowering the hardware bar: CogVideoX-2B can run on older cards such as the GTX 1080 Ti, while CogVideoX-5B runs on mainstream desktop cards such as the RTX 3060. CogVideoX is a...
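The 5B weights are on Hugging Face and load through diffusers' CogVideoX pipeline; with CPU offloading enabled, this is roughly the setup the RTX 3060 claim refers to. A minimal sketch, assuming the `THUDM/CogVideoX-5b` repo id and that `CogVideoXPipeline` and `export_to_video` are available in your diffusers version:

```python
# Minimal sketch: text-to-video with CogVideoX-5B via diffusers, with CPU offload for smaller GPUs.
# Repo id and pipeline class are assumptions to verify against the model card.
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()   # trades speed for VRAM, relevant for cards like the RTX 3060

frames = pipe(
    prompt="a paper boat drifting down a rainy street gutter, cinematic lighting",
    num_inference_steps=50,
    num_frames=49,
).frames[0]
export_to_video(frames, "paper_boat.mp4", fps=8)
```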
-
NVIDIA Releases New 8-Billion-Parameter AI Model: High Accuracy and Efficiency, Deployable on RTX Workstations
NVIDIA announced the Mistral-NeMo-Minitron 8B small language model in a blog post on August 21, highlighting its high accuracy and computational efficiency and its ability to run on GPU-accelerated data centers, clouds, and workstations. NVIDIA and Mistral AI released the open-source Mistral NeMo 12B model last month, and on that basis NVIDIA has now released the smaller Mistral-NeMo-Minitron 8B model, with 8 billion parameters in total, which can run on...