A within-subject study with 53 participants found that short conversations with a directive AI chatbot significantly shifted moral evaluations in both directions, with effect sizes increasing over a two-week follow-up (Cohen's d 1.0–2.1 at follow-up vs. 0.7–1.6 at initial interaction). Participants remained unaware of the persuasive intent and rated the directive chatbot no differently from a neutral control agent. These results suggest that AI systems may have substantial capacity for inadvertent or intentional moral influence, with effects compounding rather than fading over time.