Processing-3.0 no longer includes the minim sound library by default, but it can still be installed from the Processing IDE. Usage in JRubyArt is the same as for other libraries, ie you can use the JRubyArt load_library (or load_libraries) utility, but note that, as with other java imports, you need to include_package to use the classes of a java package without fully qualifying them.
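
The general loading pattern looks something like this (a minimal outline, not a complete sketch):

load_library :minim
include_package 'ddf.minim'          # Minim, AudioPlayer etc now resolve unqualified
include_package 'ddf.minim.analysis' # FFT, BeatDetect etc
# without include_package you would need the fully qualified JRuby form,
# eg Java::DdfMinim::Minim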

Here is a simple frequency analysis sketch:-

# This sketch demonstrates how to use an FFT to analyze
# the audio being generated by an AudioPlayer.
# 
# FFT stands for Fast Fourier Transform, which is a
# method of analyzing audio that allows you to visualize
# the frequency content of a signal. You've seen
# visualizations like this before in music players
# and car stereos.
# 
# For more information about Minim and additional features,
# visit http://code.compartmental.net/minim/
load_library :minim
include_package 'ddf.minim'
include_package 'ddf.minim.analysis'

# re-open the FFT class to make the 'spectrum' array directly available
FFT.class_eval do
  # make the protected variable of the super class FourierTransform available
  field_reader :spectrum 
end

attr_reader :fft, :jingle, :minim, :ht

def settings
  size(512, 200)
end

def setup
  sketch_title 'Analyze'
  @ht = 8
  @minim = Minim.new(self)
  # specify that we want the audio buffers of the AudioPlayer
  # to be 1024 samples long because our FFT needs to have
  # a power-of-two buffer size and this is a good size.
  @jingle = minim.load_file('jingle.mp3', 1_024)
  # loop the file indefinitely
  jingle.loop
  # create an FFT object that has a time-domain buffer
  # the same size as jingle's sample buffer
  # note that this needs to be a power of two
  # and that it means the size of the spectrum will be half as large.
  @fft = FFT.new(jingle.buffer_size, jingle.sample_rate)
end

def draw
  background(0)
  stroke(255)
  # perform a forward FFT on the samples in jingle's mix buffer,
  # which contains the mix of both the left and right channels of the file
  fft.forward(jingle.mix)
  fft.spectrum.each_with_index do |band, i|
    # draw the line for frequency band i, scaling it up a bit so we can see it
    line(i, height, i, height - band * ht)
  end
end

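# adjust the vertical scaling of the spectrum with '+'/'i' (increase) and '-'/'d' (decrease)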
def key_pressed
  case key
  when '+', 'i'
    @ht += 1
  when '-', 'd'
    @ht -= 1 unless ht < 4
  end
end

David Guttman created the original ruby-processing visualizer sketch, which I have translated for JRubyArt (I also make use of the JRubyArt norm and map1d methods to simplify the sketch).

norm

The norm(input, min, max) method in JRubyArt behaves like the vanilla processing norm, which normalizes a number from another range into a value between 0 and 1. Numbers outside of the range are not clamped to 0 and 1; in JRubyArt, if you want to clamp the output, use strict_norm instead.
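
For example, using illustrative values rather than anything from the sketch, you would expect something like:

norm(20, 0, 50)         # => 0.4
norm(-10, 0, 50)        # => -0.2, outside the range, not clamped
strict_norm(-10, 0, 50) # => 0.0, clamped to the 0..1 range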

map1d

The map1d(input, range1, range2) method is the JRubyArt replacement for vanilla processing's unfortunately named map method. Numbers outside of range1 are not clamped; in JRubyArt, if you want clamped values, use constrained_map instead.
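
Again with illustrative values:

map1d(5, (0..10), (0..100))            # => 50.0
map1d(15, (0..10), (0..100))           # => 150.0, input outside range1, not clamped
constrained_map(15, (0..10), (0..100)) # => 100.0, clamped to range2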

Code for the Visualizer sketch

# Visualizer
# After https://dry.ly/ruby-music-visualizer by David Guttman
# Load minim and include the packages we'll be using
load_library :minim
include_package 'ddf.minim'
include_package 'ddf.minim.analysis'

attr_reader :beat, :current_ffts, :dim, :fft, :freqs, :input
attr_reader :fft_smoothing, :max_ffts, :minim, :scaled_ffts

def settings
  size(1280, 100) # Let's pick a more interesting size
end

def setup
  sketch_title 'Visualizer'
  background 10 # Pick a darker background color
  setup_sound
end

def draw
  update_sound
  animate_sound
end

def animate_sound
  # This animation will be two circles with parameters controlled by FFT
  # values
  # For example, the first circle:
  # Horizontal position will be controlled by
  #   the FFT of 60hz (normalized against width)
  # Vertical position - 170hz (normalized against height)
  # Red, Green, Blue - 310hz, 600hz, 1khz (normalized against 255)
  # Size - 170hz (normalized against height), quadrupled on beat
  @dim = map1d(scaled_ffts[1], (0..1.0), (5..height)) 
  @dim *= 4 if beat.is_onset
  x1  = map1d(scaled_ffts[0], (1.0..0), (0..width / 2)) 
  y1  = map1d(scaled_ffts[1], (1.0..0), (0..height / 2)) 
  red1    = map1d(scaled_ffts[2], (1.0..0), (10..235))
  green1  = map1d(scaled_ffts[3], (1.0..0), (10..235)) 
  blue1   = map1d(scaled_ffts[4], (1.0..0), (10..235)) 
  fill red1, green1, blue1
  stroke red1 + 20, green1 + 20, blue1 + 20
  ellipse(x1, y1, dim, dim)
  x2 = map1d(scaled_ffts[5], (0..1.0), (width / 2..width))
  y2 = map1d(scaled_ffts[6], (0..1.0), (height / 2..height)) 
  red2    = map1d(scaled_ffts[7], (1.0..0), (10..235)) 
  green2  = map1d(scaled_ffts[8], (1.0..0), (10..235)) 
  blue2   = map1d(scaled_ffts[9], (1.0..0), (10..235)) 
  fill red2, green2, blue2
  stroke red2 + 20, green2 + 20, blue2 + 20
  ellipse(x2, y2, dim, dim)
end

def setup_sound
  # Creates a Minim object
  @minim = Minim.new(self)
  # Lets Minim grab sound data from mic/soundflower
  @input = @minim.get_line_in
  # Gets FFT values from sound data
  @fft = FFT.new(@input.left.size, 44_100)
  # Our beat detector object
  @beat = BeatDetect.new
  # Set an array of frequencies we'd like to get FFT data for
  # I grabbed these numbers from VLC's equalizer
  @freqs = [60, 170, 310, 600, 1_000, 3_000, 6_000, 12_000, 14_000, 16_000]
  # Create arrays to store the current FFT values,
  # previous FFT values, highest FFT values we've seen,
  # and scaled/normalized FFT values (which are easier to work with)
  @current_ffts = Array.new(freqs.size, 0.001)
  @max_ffts = Array.new(freqs.size, 0.001)
  @scaled_ffts = Array.new(freqs.size, 0.001)
  # We'll use this value to adjust the 'smoothness' factor
  # of our sound responsiveness
  @fft_smoothing = 0.7
end

def update_sound
  fft.forward input.left
  previous_ffts = current_ffts
  # Iterate over the frequencies of interest and get FFT values
  freqs.each_with_index do |freq, i|
    # The FFT value for this frequency
    new_fft = fft.get_freq(freq)
    # Set it as the frequency max if it's larger than the previous max
    max_ffts[i] = new_fft if new_fft > max_ffts[i]
    # Use our 'smoothness' factor and the previous FFT to set a current FFT
    current_ffts[i] =
      ((1 - fft_smoothing) * new_fft) + (fft_smoothing * previous_ffts[i])
    # Set a scaled/normalized FFT value that will be
    #   easier to work with for this frequency
    scaled_ffts[i] = norm(current_ffts[i], 0, max_ffts[i])
    # scaled_ffts[i] = 0 if scaled_ffts[i] < 1e-44
  end

  # Check if there's a beat, will be stored in beat.is_onset
  beat.detect(input.left)
end
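
To try the visualizer locally, save the sketch (eg as visualizer.rb, a name chosen here for illustration) and run it with JRubyArt's command line tool, assuming you have a working audio input since the sketch reads from get_line_in:

k9 run visualizer.rb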