Facing increasing scrutiny, Facebook recently announced its new “Blueprint for Content Governance and Enforcement,” outlining how the company wants to change its approach to moderating speech and working with regulators.

Documents released this week by U.K. lawmakers, which describe various ways Facebook has wielded customer data or contemplated using it to increase profits, have only added to mounting criticisms. The growing list includes its alleged mishandling of users’ sensitive data, its struggles to combat misinformation – particularly during the 2016 elections – and its contribution to increasing political polarization.

If Facebook is really intent on addressing the growing distrust among users, it is essential that it commit to transparency and viewpoint inclusion. The question is: Does the new blueprint chart a path that can succeed in doing this?

For many of those concerned about free expression online, the answer is unclear. In fact, this new blueprint includes several potentially troubling items. Among them: using artificial intelligence (AI) more “proactively” to remove harmful content, reducing the reach of “borderline” content, and inviting regulators across the globe to develop a new regulatory framework for social media (starting with Europe).

The plan also outlines the creation of an independent governance and oversight committee, slated to kick off in 2019, that could play a massive role in setting the platform’s policies.

While details on the committee are scarce, CEO Mark Zuckerberg said it would need to balance “uphold[ing] the principle of giving people a voice while also recognizing the reality of keeping people safe.” Zuckerberg said he envisions this new committee as a “Supreme Court” for content moderation, selecting controversial content decisions to adjudicate according to published “community standards.”

This decision promises to shift power (and blame) away from internal company teams to an external body beholden to the interests of the broader Facebook community. Facebook now seems ready to admit that it should not be the sole arbiter of what users can and cannot express on its platform – perhaps recognizing that it hasn’t been very good at it.

While Facebook’s recognition of the limitations of centralized governance is a positive step, it must address important questions surrounding this committee. How will its membership be decided? How will its independence be secured? And most importantly, how will it seek out a variety of voices that accurately reflects the Facebook community’s religious, political and cultural makeup?

Twitter’s Trust and Safety Council, though imperfect, can serve as a model for Facebook to follow. Formed in 2016, the Council consists of dozens of publicly listed groups including “safety advocates, academics, and researchers; grassroots advocacy organizations that rely on Twitter to build movements; and community groups working to prevent abuse.” Each entity advises the company on its safety products and policies.

Facebook should follow Twitter’s lead by publicly listing the organizations involved, but it should also go a step further by enumerating the committee’s powers and making viewpoint inclusion central to its mission.

Twitter, although transparent about the Council’s members, does not specify the Council’s power to influence content moderation policies. In contrast, Zuckerberg has claimed that Facebook’s new committee’s decisions on content moderation will be “transparent and binding.” It is vital that he keep this promise if he wants to earn long-term confidence in the committee.

Twitter also fails to include all relevant voices in its Council, particularly among its “Hateful Conduct and Harassment Partners.” While the company claims to stand for “freedom of expression for everyone,” its partners in policing offensive speech include prominent left-leaning groups like the Anti-Defamation League and the Southern Poverty Law Center – but no conservative or right-leaning organizations.

For content moderation decisions to truly represent Facebook’s U.S. market, where more people self-identify as “conservative” than “liberal” by nine percentage points, Facebook must do better than Twitter at including conservative and other right-leaning viewpoints. Facebook’s new committee cannot define and police offensive speech based solely on Silicon Valley’s liberal bias.

Following years of controversies that are both real and imagined, conservatives are particularly distrustful of Facebook. As Facebook’s new blueprint is implemented, it should make sure the concerns of conservatives are heard. In practice, this means making sure right-leaning groups are included in the new community governance process.

Facebook should seize this opportunity to make itself welcoming for a plurality of voices, fulfilling its mission to “give people the power to build community and bring the world closer together.”