Given an input RGB image and a text prompt describing the object of interest, we aim to detect the grasp pose in the image that best matches the text prompt. We follow the rectangle grasp convention widely used in previous work to define the grasp.
In the diffusion model, we represent the target grasp pose as $\mathbf{x}_0$. The objective of our language-driven grasp detection diffusion process is to denoise from a noisy state $\mathbf{x}_T$ back to the original grasp pose $\mathbf{x}_0$, conditioned on the input image and grasp instruction, jointly denoted by $\mathbf{y}$. The forward process in a traditional conditional diffusion model is defined as:
$$q(\mathbf{x}_t \mid \mathbf{x}_{t-1}) = \mathcal{N}\!\left(\sqrt{1-\beta_t}\,\mathbf{x}_{t-1}, \beta_t \mathbf{I}\right), \quad (1)$$
where the hyperparameter $\beta_t$ is the amount of noise added at diffusion step $t \in [0, T] \subseteq \mathbb{R}$.
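As a minimal PyTorch sketch of Equation (1), the snippet below applies one forward noising step to a grasp pose vector; the 5-dimensional pose and the value of $\beta_t$ are illustrative assumptions rather than our actual settings.

```python
import math
import torch

def forward_step(x_prev: torch.Tensor, beta_t: float) -> torch.Tensor:
    """One step of the forward process q(x_t | x_{t-1}) in Eq. (1):
    the mean shrinks x_{t-1} by sqrt(1 - beta_t) and Gaussian noise
    with variance beta_t is added."""
    noise = torch.randn_like(x_prev)
    return math.sqrt(1.0 - beta_t) * x_prev + math.sqrt(beta_t) * noise

# Example: a 5-dimensional rectangle grasp vector (the exact parameterization
# is an assumption for illustration only).
x0 = torch.randn(5)
x1 = forward_step(x0, beta_t=1e-4)
```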
To train a diffusion model with condition $\mathbf{y}$, we use a neural network to learn the reverse process:
$$p_\phi(\mathbf{x}_{t-1} \mid \mathbf{x}_t, \mathbf{y}) = \mathcal{N}\!\left(\mu_\phi(\mathbf{x}_t, t, \mathbf{y}), \Sigma_\phi(\mathbf{x}_t, t, \mathbf{y})\right). \quad (2)$$
In our approach, we utilize the diffusion process in the continuous domain, where $\mathbf{x}_t$ is the grasp pose state at an arbitrary time index $t$. Unlike popular discrete diffusion models, working in a continuous space improves sample quality and reduces inference time, since we can traverse the diffusion process at arbitrary timesteps and gain more fine-grained control over the denoising process.
Figure 1: The overview of our method. First, the input RGB image and text prompt are fed into the feature encoder and ALBEF fusion. Subsequently, we concurrently train two models with the same architecture: a score network to estimate the probability flow ordinary differential equation (ODE) trajectory of the diffusion process, and a conditional consistency model to determine the grasp pose within a few denoising steps.
To reduce the inference time during the denoising step of the diffusion model, we aim to estimate the original grasp pose with just a few denoising steps. Since our language-driven grasp detection task has the condition $\mathbf{y}$, we introduce a conditional consistency model, based on the consistency-model concept, to directly infer the original grasp pose during inference:
$$f_\theta(\mathbf{x}_t, t, \mathbf{y}) = \begin{cases} \mathbf{x}_t & t \in [0, \epsilon] \\ F_\theta(\mathbf{x}_t, t, \mathbf{y}) & t \in (\epsilon, T] \end{cases} \quad (3)$$
where $f_\theta(\mathbf{x}_\epsilon, t, \mathbf{y}) = \mathbf{x}_\epsilon$ is the boundary condition, and $F_\theta(\mathbf{x}_t, t, \mathbf{y})$ is a free-form deep neural network whose output has the same dimensionality as $\mathbf{x}_t$.
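A minimal sketch of the piecewise definition in Equation (3), assuming a scalar time index and an arbitrary backbone network standing in for $F_\theta$:

```python
import torch
import torch.nn as nn

class ConditionalConsistencyFunction(nn.Module):
    """Sketch of f_theta in Eq. (3). `net` stands in for the free-form
    network F_theta, taking the noisy grasp pose x_t, the time index t,
    and the fused vision-language condition y."""

    def __init__(self, net: nn.Module, eps: float = 1.0):
        super().__init__()
        self.net = net
        self.eps = eps

    def forward(self, x_t: torch.Tensor, t: float, y: torch.Tensor) -> torch.Tensor:
        # Boundary condition: for t <= eps the model is the identity map,
        # so f_theta(x_eps, eps, y) = x_eps holds by construction.
        if t <= self.eps:
            return x_t
        return self.net(x_t, t, y)
```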
To train our conditional consistency model, we employ knowledge distillation from a continuous diffusion process:
$$\mathrm{d}\mathbf{x}_t = -\frac{1}{2}\gamma_t\,\mathbf{x}_t\,\mathrm{d}t + \sqrt{\gamma_t}\,\mathrm{d}\mathbf{w}_t, \quad (4)$$
where $\gamma_t$ is a non-negative function referred to as the noise schedule, and $\mathbf{w}_t$ is the standard Brownian motion. This forward process creates a trajectory of grasp poses $\{\mathbf{x}_t\}_{t=0}^{T}$. The grasp pose state $\mathbf{x}_t$ depends on the time index $t$ and on the input image and text prompt. The grasp distribution $p(\mathbf{x}_0 \mid \mathbf{y})$ from the dataset is transformed into $p(\mathbf{x}_T \mid \mathbf{y}) \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$. Given the ground-truth grasp pose $\mathbf{x}_0$, we can sample $\mathbf{x}_t$ at arbitrary $t$:
$$p(\mathbf{x}_t \mid \mathbf{x}_0) = \mathcal{N}(\mu_t, \Sigma_t), \quad (5)$$
where
$$\mu_t = e^{\frac{1}{2}\rho_t}\,\mathbf{x}_0, \qquad \Sigma_t = \left(1 - e^{\rho_t}\right)\mathbf{I}, \qquad \rho_t = -\int_0^t \gamma_s\,\mathrm{d}s.$$
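The perturbation kernel in Equation (5) can be sampled directly. The sketch below assumes a simple constant noise schedule and approximates $\rho_t$ numerically, which is an illustrative choice rather than our actual schedule.

```python
import torch

def sample_xt(x0: torch.Tensor, t: float, gamma) -> torch.Tensor:
    """Sample x_t ~ p(x_t | x_0) from Eq. (5).

    rho_t = -integral_0^t gamma_s ds is approximated with the trapezoidal
    rule; a closed form should be preferred when the schedule admits one."""
    s = torch.linspace(0.0, t, steps=1000)
    rho_t = -torch.trapezoid(gamma(s), s)
    mean = torch.exp(0.5 * rho_t) * x0          # mu_t = e^{rho_t / 2} x_0
    std = torch.sqrt(1.0 - torch.exp(rho_t))    # Sigma_t = (1 - e^{rho_t}) I
    return mean + std * torch.randn_like(x0)

# Example with a constant schedule gamma_t = 0.1 (an illustrative choice only).
x_t = sample_xt(torch.randn(5), t=0.5, gamma=lambda s: 0.1 * torch.ones_like(s))
```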
The SDE in Equation (4) admits a corresponding probability flow ODE. With the conditional variable $\mathbf{y}$, it can be written as:
$$\frac{\mathrm{d}\mathbf{x}_t}{\mathrm{d}t} = -\frac{1}{2}\gamma_t\left[\mathbf{x}_t + \nabla \log p(\mathbf{x}_t \mid \mathbf{y})\right], \quad (6)$$
where $\nabla \log p(\mathbf{x}_t \mid \mathbf{y})$ is the score function of the conditional diffusion model.
Suppose that we have a neural network $s_\phi(\mathbf{x}_t, t, \mathbf{y})$ that approximates the score function, i.e., $s_\phi(\mathbf{x}_t, t, \mathbf{y}) \approx \nabla \log p(\mathbf{x}_t \mid \mathbf{y})$. After training the score network, we can replace the $\nabla \log p(\mathbf{x}_t \mid \mathbf{y})$ term in Equation (6) with the network:
$$\frac{\mathrm{d}\mathbf{x}_t}{\mathrm{d}t} = -\frac{1}{2}\gamma_t\left[\mathbf{x}_t + s_\phi(\mathbf{x}_t, t, \mathbf{y})\right]. \quad (7)$$
Score Function Loss. In order to approximate the score function $\nabla \log p(\mathbf{x}_t \mid \mathbf{y})$, the conditional denoising estimator minimizes the following objective:
If $\mathbf{y}$ and $t$ are fixed, we can define a transition probability that does not depend on these variables, $q(\mathbf{x}_0) = p(\mathbf{x}_0 \mid \mathbf{y})$ and $\kappa(\mathbf{x}_t) = s_\phi(\mathbf{x}_t, t, \mathbf{y})$. According to Vincent (2011), we have:
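A minimal sketch of one such conditional denoising score matching term, assuming the standard form in which $s_\phi(\mathbf{x}_t, t, \mathbf{y})$ is regressed onto the score of the perturbation kernel in Equation (5); the uniform weighting and the network call signature are assumptions.

```python
import torch
import torch.nn.functional as F

def score_matching_loss(score_net, x0, y, t: float, gamma):
    """Sketch of a conditional denoising score matching objective:
    s_phi(x_t, t, y) is regressed onto the score of the Gaussian
    perturbation kernel p(x_t | x_0) from Eq. (5)."""
    s = torch.linspace(0.0, t, steps=1000)
    rho_t = -torch.trapezoid(gamma(s), s)
    mean = torch.exp(0.5 * rho_t) * x0
    var = 1.0 - torch.exp(rho_t)
    x_t = mean + torch.sqrt(var) * torch.randn_like(x0)
    # Score of the perturbation kernel: grad_x log p(x_t | x_0) = -(x_t - mean) / var.
    target = -(x_t - mean) / var
    return F.mse_loss(score_net(x_t, t, y), target)
```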
Discretization. Consider discretizing the time horizon $[\epsilon, T]$ into $N-1$ sub-intervals with boundaries $t_1 = \epsilon < t_2 < t_3 < \ldots < t_N = T$. If $N$ is sufficiently large, we can use an ODE solver to estimate the next discretization step:
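As one concrete solver choice (an assumption; higher-order solvers such as Heun's method are equally valid), a single Euler step of Equation (7) maps $\mathbf{x}_{t_{i+1}}$ back to an estimate $\hat{\mathbf{x}}_{t_i}$:

```python
def euler_ode_step(score_net, x_next, t_next: float, t_cur: float, y, gamma_t: float):
    """One Euler step of the probability flow ODE in Eq. (7), mapping the
    sample at time t_{i+1} back to an estimate at time t_i < t_{i+1}.
    gamma_t is the noise schedule evaluated at t_{i+1}."""
    drift = -0.5 * gamma_t * (x_next + score_net(x_next, t_next, y))
    return x_next + (t_cur - t_next) * drift   # negative step: t_cur < t_next
```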
Conditional Consistency Model Loss. To enable fast sampling, we expect the predicted point $\hat{\mathbf{x}}_{t_i}$ and $\mathbf{x}_{t_{i+1}}$ to lie on the same probability flow ODE trajectory. We propose a conditional consistency loss to enforce this constraint:
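A sketch of how such a loss could be computed, reusing `sample_xt` and `euler_ode_step` from the sketches above and assuming the standard consistency-distillation recipe (a squared-error metric and a stop-gradient target copy of the model); these choices are assumptions rather than the exact form of our loss.

```python
import torch
import torch.nn.functional as F

def conditional_consistency_loss(model, target_model, score_net,
                                 x0, y, t_i: float, t_ip1: float, gamma):
    """Sketch of a consistency-distillation style loss: model outputs at two
    adjacent time points on the same probability flow ODE trajectory are
    pulled together."""
    x_tip1 = sample_xt(x0, t_ip1, gamma)                                     # from Eq. (5)
    gamma_tip1 = gamma(torch.tensor([t_ip1])).item()
    x_hat_ti = euler_ode_step(score_net, x_tip1, t_ip1, t_i, y, gamma_tip1)  # ODE-solver step
    pred = model(x_tip1, t_ip1, y)                  # f_theta(x_{t_{i+1}}, t_{i+1}, y)
    with torch.no_grad():
        target = target_model(x_hat_ti, t_i, y)     # target copy at (x̂_{t_i}, t_i, y)
    return F.mse_loss(pred, target)
```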
The input of our network is the image and a corresponding grasping text prompt represented as $\mathbf{e}$ (for example, "grasp the fork at its handle"). We first extract the image features using a 12-layer vision transformer (ViT) image encoder. The input text prompt is encoded by a text encoder using BERT or CLIP. We then combine and learn the features of the text prompt and the image using the ALBEF fusion network. The fused features are fed into the score network, and our conditional consistency model is used to learn the grasp pose. Figure 1 shows the details of our network.
Score Network. In practice, our score network is composed of several MLP layers that process three inputs: the noisy grasp pose $\mathbf{x}_t$, the time index $t$, and the conditional vision-language embedding $\mathbf{y}$. These features are then concatenated, and the score function is produced by a final MLP layer. It is crucial that the output dimension of the score network is identical to the dimension of the input $\mathbf{x}_t$, because the score function is fundamentally the gradient of the grasp pose distribution given the condition $\mathbf{y}$. Our conditional consistency model has an architecture similar to the score network; however, its output is the predicted grasp pose.
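A minimal sketch of such a score network is given below; the hidden sizes, the 5-dimensional grasp pose, and the 256-dimensional fused embedding are illustrative assumptions, while the overall structure (separate MLPs for $\mathbf{x}_t$, $t$, and $\mathbf{y}$, concatenation, and an output matching the dimension of $\mathbf{x}_t$) follows the description above.

```python
import torch
import torch.nn as nn

class ScoreNetwork(nn.Module):
    """Sketch of the MLP-based score network. Hidden sizes and input
    dimensions are illustrative assumptions."""

    def __init__(self, pose_dim: int = 5, cond_dim: int = 256, hidden: int = 256):
        super().__init__()
        self.pose_mlp = nn.Sequential(nn.Linear(pose_dim, hidden), nn.ReLU())
        self.time_mlp = nn.Sequential(nn.Linear(1, hidden), nn.ReLU())
        self.cond_mlp = nn.Sequential(nn.Linear(cond_dim, hidden), nn.ReLU())
        self.out_mlp = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, pose_dim),   # output dimension matches x_t
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # x_t: (B, pose_dim), t: (B, 1), y: (B, cond_dim)
        h = torch.cat([self.pose_mlp(x_t), self.time_mlp(t), self.cond_mlp(y)], dim=-1)
        return self.out_mlp(h)
```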
Algorithm 1: Inference Process
Input: Image and text prompt, conditional consistency model $f_\theta(\mathbf{x}, t, \mathbf{y})$, number of inference steps $P$, sequence of time points $t_1 = \epsilon < t_2 < t_3 < \cdots < t_P = T$, noise scheduler $\alpha_t = e^{\rho_t}$.
During training, we freeze the text and image encoders, then train the ALBEF fusion, the score network, and the consistency model end-to-end. The score network and the conditional consistency model share the same architecture. We train both models simultaneously for 1000 epochs with a batch size of 8 using the Adam optimizer. Training takes approximately three days on an NVIDIA A100 GPU. Regarding the parameters of the conditional consistency model, we empirically set $T = 1000$, $\epsilon = 1$, and $N = 2000$. After training the score network and the conditional consistency model $f_\theta(\mathbf{x}_t, t, \mathbf{y})$, we can sample the grasp pose given the input image and language instruction prompt in a few denoising steps using Algorithm 1.
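A sketch of the few-step sampling loop suggested by Algorithm 1; only the inputs listed above (the consistency model, the sequence of time points, and the noise scheduler $\alpha_t = e^{\rho_t}$) come from the algorithm, while the re-noising rule and the model call signature follow the common multistep consistency-sampling pattern and are assumptions.

```python
import math
import torch

@torch.no_grad()
def infer_grasp(model, y, time_points, alpha, pose_dim: int = 5):
    """Few-step grasp-pose sampling in the spirit of Algorithm 1.
    `time_points` is the increasing sequence t_1 = eps < ... < t_P = T,
    `alpha(t)` returns alpha_t = e^{rho_t}, and `model` is the trained
    conditional consistency model f_theta."""
    x = torch.randn(pose_dim)                      # x_T ~ N(0, I)
    x0_hat = model(x, time_points[-1], y)          # first prediction at t_P = T
    for t in reversed(time_points[1:-1]):          # intermediate points t_{P-1}, ..., t_2
        a = alpha(t)
        # Re-noise the current estimate to time t using the kernel of Eq. (5),
        # then denoise again with the consistency model.
        x = math.sqrt(a) * x0_hat + math.sqrt(1.0 - a) * torch.randn(pose_dim)
        x0_hat = model(x, t, y)
    return x0_hat
```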