Create an audio equalizer with an intuitive graphical interface on Windows 10
Use these libraries: tkinter, numpy, scipy, pydub, simpleaudio, soundfile, matplotlib to create an audio equalizer with an intuitive graphical interface exposing all available settings: volume, binaural acoustic effect, bass boost, 3D surround, 8D sound, etc. Include an option to open audio files; it must run on Windows 10 and work without an internet connection.

```python
import tkinter as tk
from tkinter import filedialog, messagebox
import simpleaudio as sa
import numpy as np
from pydub import AudioSegment
import scipy.signal as signal
import matplotlib.pyplot as plt
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
import threading

# --- Global Variables ---

current_audio = None
play_object = None
is_playing = False
current_playback_thread = None

# --- Audio Processing Functions ---

def apply_volume(audio_segment, volume_db):
    """Applies a volume change in dB."""
    return audio_segment + volume_db

def apply_bass_boost(audio_segment, gain_db=10, cutoff_hz=100):
    """Boosts frequencies below cutoff_hz.

    A true low-shelf filter is non-trivial to design with scipy.signal's
    standard helpers, so this demonstration boosts the low band in the
    frequency domain (FFT -> gain curve -> inverse FFT). FFT processing
    can introduce artifacts and is not production-ready EQ.
    """
    if audio_segment is None:
        return None

    samples = np.array(audio_segment.get_array_of_samples())
    if audio_segment.sample_width == 2:    # 16-bit
        samples = samples.astype(np.int16)
    elif audio_segment.sample_width == 4:  # 32-bit
        samples = samples.astype(np.int32)
    # Convert to float in [-1, 1] for processing
    samples = samples.astype(np.float64) / (2 ** (audio_segment.sample_width * 8 - 1))

    # Reshape interleaved data to (frames, 2) for stereo
    if audio_segment.channels == 2:
        samples = samples.reshape((-1, 2))

    sample_rate = audio_segment.frame_rate
    gain_linear = 10 ** (gain_db / 20.0)
    processed_samples = samples

    try:
        if audio_segment.channels == 1:
            fft_data = np.fft.rfft(samples)
            freqs = np.fft.rfftfreq(len(samples), 1 / sample_rate)
            # Gain curve: boosted below the cutoff, flat above it
            gain_curve = np.ones_like(freqs)
            gain_curve[freqs < cutoff_hz] = gain_linear
            processed_samples = np.fft.irfft(fft_data * gain_curve, n=len(samples))
        elif audio_segment.channels == 2:
            # Apply the same gain curve to each channel
            processed_samples = np.zeros_like(samples)
            for i in range(audio_segment.channels):
                channel_samples = samples[:, i]
                fft_data = np.fft.rfft(channel_samples)
                freqs = np.fft.rfftfreq(len(channel_samples), 1 / sample_rate)
                gain_curve = np.ones_like(freqs)
                gain_curve[freqs < cutoff_hz] = gain_linear
                processed_samples[:, i] = np.fft.irfft(fft_data * gain_curve,
                                                       n=len(channel_samples))
    except Exception as e:
        print(f"Error during FFT/IFFT processing: {e}")
        processed_samples = samples  # Fall back to the unprocessed samples

    # Convert back to the original integer format
    processed_samples = np.clip(processed_samples, -1.0, 1.0)
    if audio_segment.sample_width == 2:
        processed_samples = (processed_samples * (2 ** 15 - 1)).astype(np.int16)
    elif audio_segment.sample_width == 4:
        processed_samples = (processed_samples * (2 ** 31 - 1)).astype(np.int32)

    # Flatten back to interleaved 1-D data
    processed_samples = processed_samples.flatten()

    return AudioSegment(
        processed_samples.tobytes(),
        frame_rate=sample_rate,
        sample_width=audio_segment.sample_width,
        channels=audio_segment.channels,
    )
```
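The bass-boost comments note that a true low-shelf filter is non-trivial with scipy.signal's standard designers. One well-known alternative, sketched here purely as an illustration (`low_shelf_coeffs` is a helper written for this sketch, not part of the app), is to compute biquad coefficients from the Robert Bristow-Johnson "Audio EQ Cookbook" low-shelf formulas and run them through `scipy.signal.lfilter`:

```python
import numpy as np
from scipy.signal import lfilter

def low_shelf_coeffs(gain_db, cutoff_hz, sample_rate, q=0.7071):
    """Biquad low-shelf coefficients from the RBJ Audio EQ Cookbook."""
    a_lin = 10 ** (gain_db / 40.0)  # sqrt of the linear shelf gain
    w0 = 2 * np.pi * cutoff_hz / sample_rate
    alpha = np.sin(w0) / (2 * q)
    cw, ra = np.cos(w0), np.sqrt(a_lin)
    b0 = a_lin * ((a_lin + 1) - (a_lin - 1) * cw + 2 * ra * alpha)
    b1 = 2 * a_lin * ((a_lin - 1) - (a_lin + 1) * cw)
    b2 = a_lin * ((a_lin + 1) - (a_lin - 1) * cw - 2 * ra * alpha)
    a0 = (a_lin + 1) + (a_lin - 1) * cw + 2 * ra * alpha
    a1 = -2 * ((a_lin - 1) + (a_lin + 1) * cw)
    a2 = (a_lin + 1) + (a_lin - 1) * cw - 2 * ra * alpha
    return np.array([b0, b1, b2]) / a0, np.array([a0, a1, a2]) / a0

b, a = low_shelf_coeffs(gain_db=6.0, cutoff_hz=100, sample_rate=44100)
# A constant (DC) input settles at the full shelf gain, 10**(6/20) ~= 2x
boosted = lfilter(b, a, np.ones(44100))
```

Unlike the FFT approach above, this is a causal IIR filter, so it could later be adapted to block-by-block processing without re-transforming the whole file.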

```python
def apply_binaural_effect(audio_segment, delay_ms=0.5, gain_reduction_db=-3):
    """Simulates a simple binaural effect (crossfeed) for stereo audio.

    Mixes a delayed, attenuated copy of each channel into the opposite
    channel. This is a very basic simulation, not true HRTF-based binaural.
    """
    if audio_segment is None or audio_segment.channels != 2:
        print("Binaural effect requires stereo audio.")
        return audio_segment  # Return the input unchanged if not stereo

    samples = np.array(audio_segment.get_array_of_samples())
    if audio_segment.sample_width == 2:    # 16-bit
        samples = samples.astype(np.int16)
    elif audio_segment.sample_width == 4:  # 32-bit
        samples = samples.astype(np.int32)
    # Convert to float in [-1, 1] for processing
    samples = samples.astype(np.float64) / (2 ** (audio_segment.sample_width * 8 - 1))
    samples = samples.reshape((-1, 2))

    sample_rate = audio_segment.frame_rate
    delay_samples = int(sample_rate * (delay_ms / 1000.0))
    gain_linear = 10 ** (gain_reduction_db / 20.0)

    processed_samples = np.copy(samples)
    num_samples = samples.shape[0]

    # Crossfeed: each channel receives a delayed, attenuated copy of the other
    if 0 < delay_samples < num_samples:
        processed_samples[delay_samples:, 0] += samples[:-delay_samples, 1] * gain_linear
        processed_samples[delay_samples:, 1] += samples[:-delay_samples, 0] * gain_linear

    # Convert back to the original integer format
    processed_samples = np.clip(processed_samples, -1.0, 1.0)
    if audio_segment.sample_width == 2:
        processed_samples = (processed_samples * (2 ** 15 - 1)).astype(np.int16)
    elif audio_segment.sample_width == 4:
        processed_samples = (processed_samples * (2 ** 31 - 1)).astype(np.int32)

    # Flatten back to interleaved 1-D data
    processed_samples = processed_samples.flatten()

    return AudioSegment(
        processed_samples.tobytes(),
        frame_rate=sample_rate,
        sample_width=audio_segment.sample_width,
        channels=audio_segment.channels,
    )
```

Note on 3D Surround, 8D Sound, and Acoustic effects: implementing these accurately is highly complex. 3D/8D typically relies on HRTFs (Head-Related Transfer Functions), filters specific to how sound reaches your ears from different directions, and implementing them from scratch requires specialized libraries or extensive signal-processing knowledge. "Acoustic" effects are vague, but could involve reverb, echo, or EQ adjustments simulating different environments; reverb and echo can be built from delays and feedback, but simulating a specific acoustic space is hard.

A full, convincing implementation of these effects is beyond the scope of a simple example using the requested libraries. The code above covers Volume, a basic Bass Boost (via FFT/IFFT, for demonstration), and a simple binaural crossfeed; more advanced effects would require significantly more complex signal-processing code.
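The note above observes that echo can be built from a delay plus feedback. As a minimal sketch of that idea (`apply_echo` is an illustrative helper written for this example, not part of the equalizer), a single-tap feedback echo on a float NumPy buffer looks like:

```python
import numpy as np

def apply_echo(samples, sample_rate, delay_ms=250, decay=0.4):
    """Single-tap feedback echo: the delayed, attenuated output is fed
    back into the signal, so each repeat decays by `decay`."""
    d = int(sample_rate * delay_ms / 1000.0)
    # Leave tail room so the repeats are not cut off
    out = np.concatenate([samples.astype(np.float64), np.zeros(2 * d)])
    for i in range(d, len(out)):
        out[i] += decay * out[i - d]
    return np.clip(out, -1.0, 1.0)

# A unit impulse makes the repeats easy to see: a tap every `d` samples
impulse = np.zeros(1000)
impulse[0] = 1.0
echoed = apply_echo(impulse, sample_rate=1000, delay_ms=100, decay=0.4)
```

The same structure with many delay taps and some filtering is the starting point for simple reverbs, though convincing room simulation needs much more than this.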

```python
def apply_effects(audio_segment, volume_db, bass_boost_gain_db,
                  bass_boost_cutoff_hz, binaural_enabled):
    """Applies the selected effects in sequence."""
    if audio_segment is None:
        return None

    processed_audio = apply_volume(audio_segment, volume_db)

    if bass_boost_gain_db > 0:  # Only apply when the gain is positive
        processed_audio = apply_bass_boost(processed_audio,
                                           bass_boost_gain_db,
                                           bass_boost_cutoff_hz)

    if binaural_enabled:  # Crossfeed only affects stereo audio
        processed_audio = apply_binaural_effect(processed_audio)

    # Add further effects here as they are implemented, e.g.:
    # processed_audio = apply_3d_surround(processed_audio, ...)
    # processed_audio = apply_acoustic(processed_audio, ...)

    return processed_audio
```
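`apply_effects` chains the effects by feeding each function's output into the next. That pattern can be sketched independently of pydub on bare NumPy buffers (`gain`, `hard_clip`, and `chain` are illustrative helpers for this sketch, not part of the app):

```python
import numpy as np

def gain(samples, db):
    """Scale amplitude by a gain given in dB."""
    return samples * 10 ** (db / 20.0)

def hard_clip(samples, limit=1.0):
    """Keep every sample inside [-limit, limit]."""
    return np.clip(samples, -limit, limit)

def chain(samples, *effects):
    """Feed each effect's output into the next, in order."""
    for effect in effects:
        samples = effect(samples)
    return samples

x = np.array([0.1, 0.5, -0.8])
y = chain(x, lambda s: gain(s, 6.0), hard_clip)  # boost ~2x, then clip
```

Because each effect has the same signature (buffer in, buffer out), new effects can be appended to the chain without changing the caller.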

```python
# --- GUI Functions ---

def open_file():
    """Opens a file dialog and loads the selected audio file."""
    global current_audio, is_playing, play_object

    if is_playing and play_object:
        play_object.stop()
        is_playing = False
        if current_playback_thread and current_playback_thread.is_alive():
            current_playback_thread.join(timeout=0.1)  # Wait briefly for the thread

    filepath = filedialog.askopenfilename(
        filetypes=[("Audio Files", "*.mp3 *.wav *.flac *.ogg *.aac *.wma")]
    )
    if not filepath:
        return

    try:
        # pydub needs ffmpeg installed and on PATH for most compressed formats
        current_audio = AudioSegment.from_file(filepath)
        print(f"Loaded: {filepath}")
        print(f"Channels: {current_audio.channels}, "
              f"Sample Rate: {current_audio.frame_rate}, "
              f"Sample Width: {current_audio.sample_width}")
        update_waveform_plot(current_audio)
        status_label.config(text=f"Loaded: {filepath.split('/')[-1]}")
        # Reset the controls to their defaults
        volume_slider.set(0)
        bass_boost_slider.set(0)
        binaural_var.set(0)
    except Exception as e:
        messagebox.showerror(
            "Error Loading File",
            f"Could not load audio file:\n{e}\n\n"
            "Make sure ffmpeg is installed and in your system's PATH."
        )
        current_audio = None
        status_label.config(text="Failed to load file.")
        update_waveform_plot(None)
```

```python
def play_audio():
    """Plays the currently loaded audio with the applied effects."""
    global current_audio, play_object, is_playing, current_playback_thread

    if current_audio is None:
        messagebox.showwarning("No Audio", "Please load an audio file first.")
        return

    if is_playing:
        stop_audio()  # Stop the current playback before starting a new one
        if current_playback_thread and current_playback_thread.is_alive():
            current_playback_thread.join(timeout=0.1)

    # Read the current effect settings from the controls
    volume_db = volume_slider.get()
    bass_boost_gain_db = bass_boost_slider.get()
    bass_boost_cutoff_hz = 100  # Fixed cutoff for the bass boost
    binaural_enabled = binaural_var.get() == 1

    processed_audio = apply_effects(
        current_audio,
        volume_db,
        bass_boost_gain_db,
        bass_boost_cutoff_hz,
        binaural_enabled,
    )

    if processed_audio is None:
        messagebox.showerror("Processing Error", "Failed to process audio.")
        return

    # Play with simpleaudio on a separate thread to keep the GUI responsive
    def playback_task(audio_segment):
        global play_object, is_playing
        try:
            is_playing = True
            status_label.config(text="Playing...")
            play_object = sa.play_buffer(audio_segment.raw_data,
                                         num_channels=audio_segment.channels,
                                         bytes_per_sample=audio_segment.sample_width,
                                         sample_rate=audio_segment.frame_rate)
            play_object.wait_done()  # Block this thread until playback finishes
            is_playing = False
            status_label.config(text="Playback finished.")
        except Exception as e:
            print(f"Playback error: {e}")
            is_playing = False
            status_label.config(text="Playback error.")

    current_playback_thread = threading.Thread(target=playback_task,
                                               args=(processed_audio,))
    current_playback_thread.start()
```

```python
def stop_audio():
    """Stops the current playback."""
    global play_object, is_playing
    if is_playing and play_object:
        play_object.stop()
        is_playing = False
        status_label.config(text="Stopped.")
    # No need to join the thread here; playback_task exits after stop()
```

```python
def update_waveform_plot(audio_segment):
    """Updates the matplotlib plot with the audio waveform."""
    ax.clear()
    if audio_segment:
        samples = np.array(audio_segment.get_array_of_samples())
        if audio_segment.sample_width == 2:
            samples = samples.astype(np.int16)
        elif audio_segment.sample_width == 4:
            samples = samples.astype(np.int32)

        # Plot at most the first 10 seconds to keep drawing fast
        display_samples = samples
        max_samples_to_display = audio_segment.frame_rate * 10 * audio_segment.channels
        if len(samples) > max_samples_to_display:
            display_samples = samples[:max_samples_to_display]
            ax.set_title("Waveform (first 10s)")
        else:
            ax.set_title("Waveform")

        if audio_segment.channels == 1:
            time_axis = np.linspace(0, len(display_samples) / audio_segment.frame_rate,
                                    len(display_samples))
            ax.plot(time_axis, display_samples)
        elif audio_segment.channels == 2:
            # Interleaved stereo: reshape into (frames, 2) before plotting
            samples_stereo = display_samples.reshape((-1, 2))
            time_axis = np.linspace(0, len(samples_stereo) / audio_segment.frame_rate,
                                    len(samples_stereo))
            ax.plot(time_axis, samples_stereo[:, 0], label='Left', alpha=0.7)
            ax.plot(time_axis, samples_stereo[:, 1], label='Right', alpha=0.7)
            ax.legend()

        ax.set_xlabel("Time (s)")
        ax.set_ylabel("Amplitude")
        ax.grid(True)
    else:
        ax.set_title("Load audio to see waveform")
        ax.set_xlabel("Time (s)")
        ax.set_ylabel("Amplitude")

    canvas.draw()
```

```python
# --- GUI Setup ---

root = tk.Tk()
root.title("Simple Audio Equalizer")

# Frame for controls
control_frame = tk.Frame(root)
control_frame.pack(pady=10)

# File operations
open_button = tk.Button(control_frame, text="Open Audio File", command=open_file)
open_button.grid(row=0, column=0, padx=5)

play_button = tk.Button(control_frame, text="Play", command=play_audio)
play_button.grid(row=0, column=1, padx=5)

stop_button = tk.Button(control_frame, text="Stop", command=stop_audio)
stop_button.grid(row=0, column=2, padx=5)

# Status label
status_label = tk.Label(root, text="Load an audio file.")
status_label.pack()

# Frame for the effect controls
effects_frame = tk.LabelFrame(root, text="Audio Effects")
effects_frame.pack(pady=10, padx=10, fill="x")

# Volume control
tk.Label(effects_frame, text="Volume (dB):").grid(row=0, column=0, padx=5, pady=5, sticky="w")
volume_slider = tk.Scale(effects_frame, from_=-30, to=10, orient=tk.HORIZONTAL,
                         length=200, resolution=0.5)
volume_slider.set(0)  # Default 0 dB
volume_slider.grid(row=0, column=1, padx=5, pady=5, sticky="ew")

# Bass boost control
tk.Label(effects_frame, text="Bass Boost (dB):").grid(row=1, column=0, padx=5, pady=5, sticky="w")
bass_boost_slider = tk.Scale(effects_frame, from_=0, to=20, orient=tk.HORIZONTAL,
                             length=200, resolution=1)
bass_boost_slider.set(0)  # Default 0 dB
bass_boost_slider.grid(row=1, column=1, padx=5, pady=5, sticky="ew")
tk.Label(effects_frame, text="(Boosts frequencies below ~100 Hz)").grid(
    row=1, column=2, padx=5, pady=5, sticky="w")

# Binaural effect (simple crossfeed)
binaural_var = tk.IntVar()
binaural_check = tk.Checkbutton(effects_frame, text="Simple Binaural (Crossfeed)",
                                variable=binaural_var)
binaural_check.grid(row=2, column=0, columnspan=2, padx=5, pady=5, sticky="w")
tk.Label(effects_frame, text="(Requires stereo audio)").grid(
    row=2, column=2, padx=5, pady=5, sticky="w")

# Placeholder for other effects (3D, 8D, acoustic, etc.), which would
# require significant implementation effort
tk.Label(effects_frame,
         text="Advanced effects (3D, 8D, acoustic, etc.) are highly complex "
              "and not implemented in this example.").grid(
    row=3, column=0, columnspan=3, padx=5, pady=5, sticky="w")

# Let the slider column expand with the window
effects_frame.grid_columnconfigure(1, weight=1)

# Matplotlib plot area
fig, ax = plt.subplots(figsize=(8, 3))
canvas = FigureCanvasTkAgg(fig, master=root)
canvas_widget = canvas.get_tk_widget()
canvas_widget.pack(pady=10, padx=10, fill="both", expand=True)

# Initial (empty) plot
update_waveform_plot(None)

# --- Main Loop ---

def on_closing():
    """Stops playback and closes the window."""
    global is_playing, play_object
    if is_playing and play_object:
        play_object.stop()
    root.destroy()

root.protocol("WM_DELETE_WINDOW", on_closing)
root.mainloop()
```

Instructions:

  1. Save: Save the code above as a Python file (e.g., equalizer.py).
  2. Install Libraries: Open a command prompt or terminal and install the required libraries using pip:
    pip install simpleaudio numpy pydub scipy matplotlib soundfile
    
    (Note: soundfile appears in the install command for completeness, but this example loads audio through pydub; soundfile is only needed if you later add direct WAV/FLAC reading or writing.)
  3. Install FFmpeg: pydub requires the ffmpeg executable to be installed on your system and accessible in your system's PATH for loading most audio formats (like MP3, AAC, WMA). Download ffmpeg from https://ffmpeg.org/download.html and follow instructions to add its bin directory to your system's PATH environment variable on Windows 10. This step does require internet for the initial download, but the application runs offline once ffmpeg is installed.
  4. Run: Execute the Python script from your command prompt or terminal:
    python equalizer.py
    

This will open the GUI application. You can then use the "Open Audio File" button to load a file, adjust the sliders and checkboxes for basic effects (Volume, Bass Boost, Simple Binaural Crossfeed), and click "Play" to hear the result. The waveform of the loaded audio will be displayed.

Important Considerations:

  • Advanced Effects (3D, 8D, Acoustic): As noted in the code comments, implementing complex spatial audio effects (like true 3D surround or 8D sound) and detailed acoustic simulations is significantly more involved than what can be provided in a simple example using these libraries. It typically requires deep knowledge of digital signal processing, potentially using HRTFs, convolution reverbs, or specialized audio APIs. The provided code includes placeholders and explanations for these.
  • Performance: Processing audio, especially with effects like FFT/IFFT for EQ, can be computationally intensive for long audio files. The waveform display is limited to the first 10 seconds for performance.
  • FFmpeg: The dependency on ffmpeg is crucial for pydub to handle various audio formats. Ensure it's installed and in the PATH.
  • Real-time Processing: This example applies effects before playback. Real-time processing (applying effects while the audio is playing) is much more complex and typically requires lower-level audio APIs (like PortAudio, which simpleaudio uses internally but doesn't expose for easy real-time stream manipulation) and careful buffering to avoid glitches.
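The real-time point above can be illustrated without an audio device: streaming engines hand a callback one small buffer at a time, and the effect must be applied per block rather than over the whole file. A minimal offline simulation of that pattern (pure NumPy; `BLOCK`, `process_block`, and `stream` are hypothetical names for this sketch):

```python
import numpy as np

BLOCK = 256  # frames per callback, as a real audio API would deliver them

def process_block(block, gain):
    """What a PortAudio-style callback would do with each buffer."""
    return np.clip(block * gain, -1.0, 1.0)

def stream(samples, gain):
    """Feed the signal through the callback one block at a time."""
    out = np.empty_like(samples)
    for start in range(0, len(samples), BLOCK):
        chunk = samples[start:start + BLOCK]
        out[start:start + len(chunk)] = process_block(chunk, gain)
    return out

x = np.linspace(-1.0, 1.0, 1000)
y = stream(x, gain=0.5)
```

In a real engine each `process_block` call must also finish well within the buffer's duration (about 5.8 ms for 256 frames at 44.1 kHz), which is why FFT-over-the-whole-file effects like the bass boost above do not transfer directly to real-time use.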