<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Publications | Zachary Zhao</title><link>https://bobbed1999.github.io/publications/</link><atom:link href="https://bobbed1999.github.io/publications/index.xml" rel="self" type="application/rss+xml"/><description>Publications</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Wed, 01 Oct 2025 00:00:00 +0000</lastBuildDate><image><url>https://bobbed1999.github.io/media/icon_hu_37226ddb91a8677e.png</url><title>Publications</title><link>https://bobbed1999.github.io/publications/</link></image><item><title>Causal Reinforcement Learning based Agent-Patient Interaction with Clinical Domain Knowledge</title><link>https://bobbed1999.github.io/publications/conference-paper/</link><pubDate>Wed, 01 Oct 2025 00:00:00 +0000</pubDate><guid>https://bobbed1999.github.io/publications/conference-paper/</guid><description></description></item><item><title>Speaking Memories: A Multimodal Adaptive Dialogue Framework for Reminiscence Robotics</title><link>https://bobbed1999.github.io/publications/preprint1/</link><pubDate>Sun, 28 Sep 2025 00:00:00 +0000</pubDate><guid>https://bobbed1999.github.io/publications/preprint1/</guid><description>&lt;p&gt;This work is driven by the results in my previous paper
on HRI.&lt;/p&gt;
</description></item><item><title>A Distributed Multimodal Robotic Framework for Emotion-Aware Reminiscence Dialogue in Dementia Care</title><link>https://bobbed1999.github.io/publications/conference-paper1/</link><pubDate>Mon, 01 Sep 2025 00:00:00 +0000</pubDate><guid>https://bobbed1999.github.io/publications/conference-paper1/</guid><description>
&lt;p&gt;We introduce an embodied robotic implementation of the &lt;strong&gt;PARTNER&lt;/strong&gt; framework (Personalized AI and Robotics to Nurture Engaging Reminiscence), a distributed multimodal architecture for emotion-aware, personalized dialogue in socially assistive contexts. The framework has three components: a secure cloud portal for managing media, a local server for processing multimodal inputs, and an embodied robot client. PARTNER combines auditory, visual, and textual inputs using Whisper for speech transcription and a vision–language model (GPT-4o) that infers implicit affect from facial snapshots and dialogue history, rather than relying on rigid emotion classifiers. To enhance reproducibility and support future model training, PARTNER incorporates a real-time logging pipeline that synchronizes user inputs, sensor streams, and model outputs into a structured dataset.
We provide a system-level evaluation on our robot, measuring end-to-end command–response latency, transcription accuracy, and dialogue coherence under varied sensing and environmental conditions. Our experiments show sub-3 s loop latency on our testbed, robust transcription across various noise environments, and consistent responses during multi-turn dialogues. These findings validate PARTNER as a deployable platform for adaptive human–robot interaction. To our knowledge, PARTNER is the first Socially Assistive Robotics (SAR)-oriented system that (i) unifies a cloud portal for reminiscence media with a locally executed interaction server and an embodied agent, (ii) leverages VLM-based implicit affect cues for dialogue policy, and (iii) offers a real-time multimodal logging substrate to facilitate future domain-specific VLM/LLM fine-tuning.&lt;/p&gt;</description></item><item><title>Multimodal Perception-Driven Decision-Making for Human-Robot Interaction: A Survey</title><link>https://bobbed1999.github.io/publications/journal-article1/</link><pubDate>Tue, 05 Aug 2025 00:00:00 +0000</pubDate><guid>https://bobbed1999.github.io/publications/journal-article1/</guid><description></description></item><item><title>Interval Short-Term Traffic Flow Prediction Method Based on CEEMDAN-SE Noise Reduction and LSTM Optimized by GWO</title><link>https://bobbed1999.github.io/publications/journal-article/</link><pubDate>Wed, 10 Aug 2022 00:00:00 +0000</pubDate><guid>https://bobbed1999.github.io/publications/journal-article/</guid><description></description></item><item><title>An example preprint / working paper</title><link>https://bobbed1999.github.io/publications/preprint/</link><pubDate>Sun, 07 Apr 2019 00:00:00 +0000</pubDate><guid>https://bobbed1999.github.io/publications/preprint/</guid><description>&lt;p&gt;This work is driven by the results in my previous paper
on LLMs.&lt;/p&gt;
&lt;div class="callout flex px-4 py-3 mb-6 rounded-md border-l-4 bg-blue-100 dark:bg-blue-900 border-blue-500"
data-callout="note"
data-callout-metadata=""&gt;
&lt;span class="callout-icon pr-3 pt-1 text-blue-600 dark:text-blue-300"&gt;
&lt;svg height="24" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"&gt;&lt;path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.5" d="m16.862 4.487l1.687-1.688a1.875 1.875 0 1 1 2.652 2.652L6.832 19.82a4.5 4.5 0 0 1-1.897 1.13l-2.685.8l.8-2.685a4.5 4.5 0 0 1 1.13-1.897zm0 0L19.5 7.125"/&gt;&lt;/svg&gt;
&lt;/span&gt;
&lt;div class="callout-content dark:text-neutral-300"&gt;
&lt;div class="callout-title font-semibold mb-1"&gt;Note&lt;/div&gt;
&lt;div class="callout-body"&gt;&lt;p&gt;Create your slides in Markdown - click the &lt;em&gt;Slides&lt;/em&gt; button to check out the example.&lt;/p&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
</description></item></channel></rss>