Current smart assistants rely almost entirely on users for planning and oversight. To serve a wider range of stakeholders, next-generation AI systems will have to take on some of this cognitive load and function more proactively. This talk explores ethical challenges for AI systems that aim to be proactive or anticipatory. One challenge is identifying aspects of human communication that are fundamentally social and go beyond the semantic content of what is asserted; commissive speech acts, such as promises, are a key example. Another is tracking the obligations, permissions, and prerogatives in force for an agent at a given time, and enabling agents to satisfy their duties and exercise their prerogatives. Each of these tasks sheds important light on neglected aspects of AI safety and alignment.