
Getting Started with Qwen2.5-Math


Over the past few years, significant progress has been made in researching and improving the reasoning capabilities of large language models, with a strong focus on their proficiency in solving arithmetic and mathematical problems.

A model with strong arithmetic and mathematical reasoning can help in:

  • Personalized Learning: AI-powered tutors can adapt to individual students' needs, helping them understand complex mathematical concepts more effectively.
  • Problem-Solving Assistance: Automating step-by-step explanations for solving problems improves student engagement and comprehension.
  • Curriculum Design: Creating adaptive and progressive learning modules in subjects like algebra and calculus.

This article explores how advancements in mathematical reasoning are driving innovations in AI models like Qwen2.5-Math, and its applications in personalized learning, problem solving, and curriculum design.

Learning Objectives

  • Understand and explore the Qwen2.5-Math series and its components.
  • Learn about the Qwen2.5-Math model architecture.
  • Gain hands-on exposure to Qwen2.5-Math through examples.
  • Learn about the performance of Qwen2.5-Math on various benchmarks.

What is Qwen2.5-Math?

The Qwen2.5-Math series is the latest addition to Alibaba Cloud's Qwen family of open-source, math-specific large language models. It follows the earlier release of Qwen2-Math, a series of specialized mathematical language models based on the Qwen2 LLMs. These models demonstrate advanced mathematical capabilities, surpassing both open-source alternatives and even some closed-source models like GPT-4o.

The new series demonstrates significant performance improvements over the Qwen2-Math series on Chinese and English mathematics benchmarks. While Qwen2-Math applied Chain-of-Thought (CoT) reasoning to English math problems only, the Qwen2.5-Math series expands its capabilities by incorporating both CoT and Tool-Integrated Reasoning (TIR) to tackle math problems in both Chinese and English effectively.
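
As a quick illustration, here is a minimal sketch of querying a Qwen2.5-Math instruct model with the Hugging Face transformers library. The CoT system prompt follows the instruction published on the model card; the example equation and generation settings are illustrative assumptions, not taken from the article.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-Math-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# CoT mode: reason in natural language and box the final answer.
# For TIR mode, the model card swaps the system prompt to ask the model
# to integrate natural language reasoning with programs instead.
messages = [
    {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
    {"role": "user", "content": "Find the value of x that satisfies 4x + 5 = 6x + 7."},  # illustrative problem
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))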

Qwen2.5-Math vs Qwen2-Math

The comparison between Qwen2.5-Math and Qwen2-Math highlights the advancements in mathematical reasoning and problem-solving capabilities achieved in the latest iteration of Alibaba Cloud's math-specific language models.

Property | Qwen2-Math | Qwen2.5-Math
Pre-training data size | 700B tokens (from Qwen Math Corpus v1) | Over 1T tokens (from Qwen Math Corpus v2)
Languages supported | English | English and Chinese
Approach | Chain-of-Thought (CoT) | Chain-of-Thought (CoT), Tool-Integrated Reasoning (TIR)
Benchmark scores (GSM8K, MATH, MMLU-STEM) | 89.1, 60.5, 79.1 | 90.8, 66.8, 82.8
Model variants | Qwen2-Math-1.5B/7B/72B | Qwen2.5-Math-1.5B/7B/72B

Optimizing Training Data

The Qwen2.5-Math series is trained on the Qwen Math Corpus v2, which comprises over 1 trillion tokens of high-quality mathematical data in both English and Chinese. This dataset includes synthetic mathematical data generated with the Qwen2-Math-72B-Instruct model, plus aggregated Chinese mathematical data sourced from web content, books, and code repositories through multiple recall cycles.

Chain-of-Thought (CoT) Dataset

The chain-of-thought (CoT) dataset for Qwen2.5-Math is a comprehensive collection of mathematical problems aimed at strengthening the model's reasoning capabilities. It includes:

  • 580k English and 500k Chinese mathematical problems, including both annotated and synthesized items.
  • Annotated data derived from sources like GSM8K, MATH, and NuminaMath.

Tool-Integrated Reasoning (TIR) Dataset

To address the computational and algorithmic challenges that CoT prompting struggles with, such as solving quadratic equations or computing eigenvalues, the tool-integrated reasoning (TIR) dataset was introduced. It improves the model's proficiency in symbolic manipulation and precise calculation by letting it use a Python interpreter during reasoning (a minimal executor sketch follows the list below). The dataset includes:

  • 190k problems from benchmarks like GSM8K, MATH, CollegeMath, and NuminaMath.
  • 205k problems created using techniques from MuggleMath and DotaMath to evolve queries from the GSM8K and MATH training sets.
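
The TIR loop itself is simple in outline: the model emits a Python snippet, an executor runs it, and the printed result is fed back into the conversation. Below is a minimal, unsandboxed sketch of that executor step; the run_python_block helper and the example model output are illustrative assumptions, not part of the Qwen tooling.

import re
import subprocess
import sys

def run_python_block(model_output: str) -> str:
    # Extract the first ```python ...``` block the model produced.
    match = re.search(r"```python\n(.*?)```", model_output, re.DOTALL)
    if match is None:
        return ""
    # Run it in a fresh interpreter; real TIR pipelines sandbox this step.
    result = subprocess.run(
        [sys.executable, "-c", match.group(1)],
        capture_output=True, text=True, timeout=10,
    )
    return result.stdout or result.stderr

# Example: the model proposes code for a quadratic; its printed roots
# would be appended to the conversation before the next generation turn.
model_output = '''To solve x^2 - 5x + 6 = 0, I will use sympy:
```python
from sympy import solve, symbols
x = symbols("x")
print(solve(x**2 - 5*x + 6, x))
```'''
print(run_python_block(model_output))  # [2, 3]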

Efficient Model Training

Since Qwen2.5-Math is an upgraded version of Qwen2-Math, its training builds on the Qwen2-Math pipeline as follows:

  • The Qwen2-Math models are trained on Qwen Math Corpus v1, a high-quality dataset containing roughly 700 billion tokens of mathematical content.
  • Developers train a math-specific reward model, Qwen2-Math-RM, derived from the Qwen2-Math-72B model.
  • The Qwen2.5 series base models serve for parameter initialization, improving language understanding, code generation, and text reasoning capabilities.
  • After training the base Qwen2.5-Math model, developers train a math-specific reward model, Qwen2.5-Math-RM-72B, based on Qwen2.5-Math-72B. This reward model evolves the SFT data through Rejection Sampling (sketched below) for the SFT model (Qwen2.5-Math-SFT).
  • An instruct model (Qwen2.5-Math-Instruct) is built at the end to polish the quality of responses. It is created through an additional iteration using the Qwen2-Math-Instruct models and Qwen2.5-Math-RM-72B, incorporating Tool-Integrated Reasoning (TIR) data and SFT data refined through Group Relative Policy Optimization (GRPO).
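
As a rough illustration of the rejection-sampling step: sample several candidate solutions per problem, score them with the reward model, and keep only the best as new SFT data. The generate_fn and reward_fn callables below are hypothetical stand-ins for the actual Qwen models.

def rejection_sample(problem: str, generate_fn, reward_fn, n_candidates: int = 8, keep: int = 1):
    """Keep the highest-reward candidate solutions as SFT training pairs.

    generate_fn(problem) -> str samples one candidate solution (hypothetical).
    reward_fn(problem, solution) -> float scores it with the reward model (hypothetical).
    """
    candidates = [generate_fn(problem) for _ in range(n_candidates)]
    ranked = sorted(candidates, key=lambda s: reward_fn(problem, s), reverse=True)
    return [(problem, solution) for solution in ranked[:keep]]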

Optimizing Model Performance

Strong model performance is key to delivering faster, more accurate results, ensuring efficiency and reliability in applications.

Base Models Performance

The base models Qwen2.5-Math-1.5B/7B/72B achieved significant improvements over Qwen2-Math-1.5B/7B/72B on English math benchmarks (GSM8K, MATH, and MMLU-STEM) and Chinese math benchmarks (CMATH, GaoKao Math Cloze, and GaoKao Math QA).

Base Models Benchmark Results

For example, the Qwen2.5-Math-1.5B/7B/72B models show significant score improvements of 5.4, 5.0, and 6.3 on MATH, and of 3.4, 12.2, and 19.8 on GaoKao Math QA.

Instruction-tuned Models Performance

The Qwen2.5-Math-72B-Instruct model outperformed both open-source models and leading closed-source models, such as GPT-4o and Gemini Math-Specialized 1.5 Pro.

Instruction-tuned Models Performance

The Qwen2.5-Math-72B-Instruct model surpasses its predecessor, Qwen2-Math-72B-Instruct, by an average of 4.4 points on English benchmarks and 6.1 points on Chinese benchmarks, establishing it as the leading open-source mathematical model available today.

On extremely challenging benchmarks such as AIME 2024 and AMC23, models like Claude 3 Opus, GPT-4 Turbo, and Gemini 1.5 Pro solve only 1 or 2 out of 30 problems. In contrast, Qwen2.5-Math-72B-Instruct performs remarkably well, solving 9 problems in greedy-decoding CoT mode and 12 problems in TIR mode. Moreover, with the assistance of the reward model (RM), Qwen2.5-Math-7B-Instruct achieves an impressive 21 solved problems, showcasing its advanced mathematical problem-solving capabilities.

Qwen2.5-Math vs Closed-source models

Working Demo

Let's try the Qwen2.5-Math demo using the Hugging Face space here.

This space provides a web-based user interface for entering mathematical or arithmetic problems in either image or text format to test the model's capabilities.

To support multiple modalities, this space uses Qwen2-VL for OCR and Qwen2.5-Math for mathematical reasoning.

Qwen-VL (Qwen Large Vision Language Model) is a multimodal vision-language model that accepts images and text as inputs. It natively supports English and Chinese and performs various image-to-text generation tasks such as image captioning, visual question answering, visual reasoning, and text recognition.

The Qwen-VL series includes several models, such as Qwen-VL, Qwen-VL-Chat, Qwen-VL-Plus, and Qwen-VL-Max. Qwen-VL-Max is Qwen's most capable large vision-language model, delivering optimal performance on an even broader range of complex tasks.

Step 1: Extracting math content using Qwen-VL

The system uses the qwen-vl-max-0809 model to understand, process, and extract textual information from the input images. The process_image() function first receives the input image and extracts the math-related content, ensuring accurate transcription of any LaTeX formulas. The system then applies the following standard prompt to extract the textual, math-related content from the image.

The prompt instructs: “Describe the math-related content in this image, ensuring accurate transcription of any LaTeX formulas. Do not describe non-mathematical details.”

import os

os.system('pip install dashscope -U')
import tempfile
from pathlib import Path
import secrets
import dashscope
from dashscope import MultiModalConversation, Generation
from PIL import Image


YOUR_API_TOKEN = os.getenv('YOUR_API_TOKEN')
dashscope.api_key = YOUR_API_TOKEN
math_messages = []

def process_image(image, shouldConvert=False):
    global math_messages
    math_messages = []  # reset the conversation whenever a new image is uploaded
    uploaded_file_dir = os.environ.get("GRADIO_TEMP_DIR") or str(
        Path(tempfile.gettempdir()) / "gradio"
    )
    os.makedirs(uploaded_file_dir, exist_ok=True)

    # Save the image to a uniquely named temporary file
    name = f"tmp{secrets.token_hex(20)}.jpg"
    filename = os.path.join(uploaded_file_dir, name)

    if shouldConvert:
        # Flatten transparent images onto a white RGB background before saving as JPEG
        new_img = Image.new('RGB', size=(image.width, image.height), color=(255, 255, 255))
        new_img.paste(image, (0, 0), mask=image)
        image = new_img
    image.save(filename)

    # Ask the vision-language model to transcribe the math content of the image
    messages = [{
        'role': 'system',
        'content': [{'text': 'You are a helpful assistant.'}]
    }, {
        'role': 'user',
        'content': [
            {'image': f'file://{filename}'},
            {'text': 'Please describe the math-related content in this image, ensuring that any LaTeX formulas are correctly transcribed. Non-mathematical details do not need to be described.'}
        ]
    }]

    response = MultiModalConversation.call(model="qwen-vl-max-0809", messages=messages)

    os.remove(filename)

    return response.output.choices[0]["message"]["content"]
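
For instance, one could call process_image() on a locally loaded image like this (the file name is a hypothetical example):

from PIL import Image

img = Image.open("equation.png").convert("RGBA")  # hypothetical input file
description = process_image(img, shouldConvert=True)  # flatten the alpha channel before OCR
print(description)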

Step 2: Mathematical reasoning using Qwen2.5-Math

This step takes the image description and passes it, together with the user's question, to the Qwen2.5 model to generate the response. The qwen2.5-math-72b-instruct model performs the mathematical reasoning in this process.

def get_math_response(image_description, user_question):
    global math_messages
    if not math_messages:
        math_messages.append({'role': 'system', 'content': 'You are a helpful math assistant.'})
    math_messages = math_messages[:1]  # keep only the system message
    if image_description is not None:
        content = f'Image description: {image_description}\n\n'
    else:
        content = ''
    query = f"{content}User question: {user_question}"
    math_messages.append({'role': 'user', 'content': query})
    response = Generation.call(
        model="qwen2.5-math-72b-instruct",
        messages=math_messages,
        result_format='message',
        stream=True
    )
    answer = None
    for resp in response:
        if resp.output is None:
            continue
        answer = resp.output.choices[0].message.content
        # Escape backslashes so LaTeX renders correctly in the Gradio UI
        yield answer.replace("\\", "\\\\")
    print(f'query: {query}\nanswer: {answer}')
    if answer is None:
        math_messages.pop()  # drop the user turn if generation failed
    else:
        math_messages.append({'role': 'assistant', 'content': answer})
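
Putting the two steps together, a minimal end-to-end run outside Gradio might look like the following sketch (the image path and question are hypothetical; get_math_response() is a generator, so we drain it to obtain the final streamed answer):

from PIL import Image

image = Image.open("problem.png").convert("RGBA")  # hypothetical local image
description = process_image(image, shouldConvert=True)

answer = None
for partial in get_math_response(description, "Solve the problem shown in the image."):
    answer = partial  # each streamed chunk replaces the previous partial answer
print(answer)  # note: backslashes arrive doubled, escaped for display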

Having covered the models used in this space, let's look at some examples to assess the model's ability to solve mathematical or arithmetic problems.

Example 1

An input image containing the following problem statement –

Model Output - Example 1: Qwen2.5-Math

The model finds the values of x as 5 and y as 2. It also provides step-by-step natural language reasoning while finding the values of x and y.

Example 2

An input image containing the following problem statement –

Model Output - Example 2: Qwen2.5-Math

The model finds the value of the final expression to be 50.

Example 3

An input image containing the following problem statement –

Model Output - Example 3: Qwen2.5-Math

The model finds the value of the above expression to be 5.

Conclusion

In this article, we explored Qwen2.5-Math, a series of mathematical models with strong reasoning capabilities. We examined its components, training data, architecture, and performance on various standard benchmarks. Additionally, we reviewed the demo, testing it with a range of moderate to complex examples.

Key Takeaways

  • The Qwen2.5-Math models support both Chinese and English and showcase advanced mathematical reasoning capabilities, using techniques such as Chain-of-Thought (CoT) and Tool-Integrated Reasoning (TIR).
  • The Qwen2.5-Math series includes several variants based on parameter count, with models available in 1.5B, 7B, and 72B sizes.
  • The Qwen2.5-Math models leverage over 1 trillion tokens for pre-training, a substantial increase over the 700 billion tokens used for Qwen2-Math.
  • Qwen2.5-Math surpasses Qwen2-Math across various English and Chinese benchmarks. Additionally, it outperforms models like Claude 3 Opus, GPT-4 Turbo, and Gemini 1.5 Pro on challenging benchmarks such as AIME 2024.

Frequently Asked Questions

Q1. What is the difference between Qwen2.5-Math and Qwen2-Math?

A. Qwen2.5-Math is an upgraded version of Qwen2-Math, offering improved performance, better accuracy in solving complex mathematical problems, and enhanced training techniques.

Q2. Which model performs better for complex mathematical tasks, Qwen2.5-Math or Qwen2-Math?

A. Qwen2.5-Math typically outperforms Qwen2-Math on complex tasks due to its advanced training and refined capabilities in mathematical reasoning.

Q3. How do Qwen2.5-Math and Qwen2-Math handle mathematical reasoning?

A. Both models are designed for mathematical reasoning, but Qwen2.5 uses more refined algorithms and training data to solve challenging problems more effectively.

Q4. What is the significance of training data in Qwen2.5-Math vs Qwen2-Math?

A. Qwen2.5-Math benefits from a larger and more diverse dataset, which strengthens its ability to generalize and solve complex mathematical problems more accurately than Qwen2-Math.

Q5. Are there any differences in processing speed between Qwen2.5-Math and Qwen2-Math?

A. Qwen2.5 is optimized for faster processing and provides quicker responses than Qwen2-Math while maintaining high accuracy.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author's discretion.

A Data Science professional with 7.5 years of experience in data science, machine learning, and programming, with hands-on experience across domains like data analytics, deep learning, big data, and natural language processing.
