AI meeting summary: Henrik Axel leads a special session with a workshop on the intersection of Web3 and AI. The session includes introductions from participants working in software development, blockchain and AI for business processes, decentralized identity and reputation, DAO governance models, and more. Henrik discusses his work with Reality Plus to create a network state called the Republic of Reality, using Balaji Srinivasan's book as inspiration. They have used GPT to automate moderation and culture within their gaming community but are looking to expand this into a DAO context, with workshop attendees' input on potential challenges and opportunities in using AI. The project aims to address concerns around addictive extrinsic reward mechanisms in the play-to-earn models prevalent in metaverses by rethinking how to introduce gaming ethically while creating jobs for 100k people over five years. Challenges include managing hypergrowth effectively while aligning stakeholders with their vision across different blockchains and ecosystems. The transcript discusses the use of flippers in the marketplace for NFT cards and the importance of building a meaningful community while also managing commercial aspects. The speaker explores how to moderate and regulate stablecoins, and how to reduce key-person risk through decentralization. They also discuss cultural science and its relevance to building successful decentralized communities, using GPT models for language processing, and the potential implications of OpenAI's training methods on logic and semantics. Stephen Wolfram's work on the overall structure of how humans create languages is discussed. The rapid development framework involves using a big model to train a smaller one, calling the API to generate classification labels. The intent model accurately predicts whether users are crypto traders, fans, or casuals based on their chats.
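The "big model trains a smaller one" pattern mentioned above can be sketched as follows. This is a minimal illustration, not the session's actual pipeline: the hard-coded labels stand in for classifications returned by a large model's API, and the small model is a tiny pure-Python Naive Bayes intent classifier over bag-of-words features. All example chats and names are invented.

```python
import math
import re
from collections import Counter, defaultdict

# Step 1 (stand-in): pretend a large model's API has already labelled these
# chat messages with user intent. In a real pipeline these labels would come
# from API calls, not a hard-coded list.
labelled_chats = [
    ("wen token listing, price prediction?", "crypto_trader"),
    ("floor price is pumping, time to flip", "crypto_trader"),
    ("loved the new quest line, great lore", "fan"),
    ("the soundtrack in this game is amazing", "fan"),
    ("hi everyone, just lurking today", "casual"),
    ("gm, anything fun happening?", "casual"),
]

def tokenize(text):
    # Lowercase word tokens; punctuation is stripped.
    return re.findall(r"[a-z']+", text.lower())

class TinyNaiveBayes:
    """Step 2: a very small intent classifier trained on model-labelled chats."""

    def fit(self, pairs):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter()
        for text, label in pairs:
            self.class_counts[label] += 1
            self.word_counts[label].update(tokenize(text))
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        scores = {}
        total = sum(self.class_counts.values())
        for label in self.class_counts:
            # Log prior plus log likelihood with add-one smoothing.
            score = math.log(self.class_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in tokenize(text):
                score += math.log((self.word_counts[label][w] + 1) / denom)
            scores[label] = score
        return max(scores, key=scores.get)

model = TinyNaiveBayes().fit(labelled_chats)
print(model.predict("what's the floor price today?"))
```

The point of the sketch is the shape of the loop: the expensive model produces labels once, and a cheap local model handles the ongoing classification traffic.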
The moderation model was retrained with data from Kaggle to identify toxic messages in gaming and blockchain environments. There were very few negative sentiments in the data set, but GPT-4 helped generate more for analysis. The contribution model is still evolving, as it is difficult to quantify good contributions, but tagging data will help improve the centralized model over time. The transcript discusses the use of AI in governance for decentralized communities, particularly gaming communities. The focus is on automating and supporting infrastructure and culture, with examples including smart contracts, proposals, voting systems, contribution models, and more. There is a suggestion that AI could be used to identify key contributors and reward them properly. The group also discusses the challenges of implementing AI in governance, but overall sees it as a valuable tool for quick wins in improving community contributions. The team discusses the potential uses and limitations of AI in community management. They agree that social algorithms can be gamed, so they prefer human curation until they fully understand how the system works. They also express concern about overfitting and model collapse if everyone starts using generated content instead of real content. The team suggests using AI for tasks such as summarizing big calls and matchmaking based on shared interests, but acknowledges that decentralized communities may be concerned about relying on a centralized API. They also discuss the possibility of distributed models and of data stewardship moving to DAOs. Ultimately, they conclude that smaller models are sufficient for their purposes and emphasize the importance of keeping a copy of human-generated content to avoid overfitting. Keywords: AI, GPT, blockchain, machine learning, automation, smart contracts. Share feedback: https://airtable.com/shrTJHhCq2PHdC3fl
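The synthetic-augmentation step above (few real toxic examples, GPT-4 generating more) pairs naturally with the point about keeping a copy of human-generated content. Below is a hedged sketch of that bookkeeping only, assuming nothing about the real pipeline: the "synthetic" rows are hard-coded stand-ins for GPT-4 output, and the helper names (`augment`, `human_only`, `label_balance`) are invented for illustration.

```python
# Tag every training row's provenance so the human-written subset can always
# be recovered separately -- guarding against later retraining on purely
# synthetic text (the overfitting / model-collapse concern from the session).

HUMAN, SYNTHETIC = "human", "synthetic"

def make_row(text, label, source):
    return {"text": text, "label": label, "source": source}

human_rows = [
    make_row("great match everyone, well played", "ok", HUMAN),
    make_row("you are trash, uninstall the game", "toxic", HUMAN),
]

# Stand-ins for GPT-4-generated toxic examples (illustrative strings only).
synthetic_rows = [
    make_row("nobody wants you in this guild, leave", "toxic", SYNTHETIC),
    make_row("worst player I have ever seen, quit now", "toxic", SYNTHETIC),
]

def augment(human, synthetic):
    """Merge datasets while preserving provenance tags."""
    return human + synthetic

def human_only(rows):
    """Recover the pristine human-written subset for later retraining."""
    return [r for r in rows if r["source"] == HUMAN]

def label_balance(rows):
    counts = {}
    for r in rows:
        counts[r["label"]] = counts.get(r["label"], 0) + 1
    return counts

training_set = augment(human_rows, synthetic_rows)
print(label_balance(training_set))    # toxic examples now outnumber ok ones
print(len(human_only(training_set)))  # original human data still recoverable
```

The design choice is the `source` tag: augmentation becomes reversible, so a future retraining run can start from verified human data rather than from a blended set.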
Outline: The following outline is based on the conversation, with approximate timestamps:
- Introduction (00:00 - 04:30) Participants greet each other and introduce themselves. Henrik introduces the session and the workshop. Henrik shares his agenda for the session.
- Using AI in a DAO Context (04:30 - 19:00) Henrik discusses the potential of using AI in a DAO context. Participants share their thoughts on using AI in DAO governance. Henrik proposes an exercise to map out the opportunities and challenges of using AI.
- GPT and AI Modelling (19:00 - 37:00) Henrik discusses his work with GPT and automating moderation. Participants share their experiences with GPT and AI modelling. Henrik explains the process of building an intent model to classify chats. Participants discuss the challenges of building a large language model.
- Using Analytics and Large Language Models (37:00 - 47:00) Henrik discusses the use of analytics and fine-tuned large language models. Participants share their experiences with analytics and large language models.
- Wrap-up and Feedback (47:00 - 50:00) Henrik summarizes the session and thanks the participants. Participants provide feedback and ask for the presentation slides. Henrik shares the Google Doc and the link to the open-source fine-tuned language models.
Notes:
- Participants introduce themselves and their work.
- They discuss using large language models for decision-making processes.
- They mention using AI or GPT as a consultant.
- They talk about a workshop where they brainstorm opportunities and requirements.
- They mention using a Google Doc to brainstorm with GPT-4.
- They discuss using a Miro board to visualize their ideas.
- They mention the importance of human curation for the large language models.
- They talk about notification recaps and a conference story matcher tool.
- They mention a list of open-source fine-tuned large language models.
- They plan to share the slides and recording of the session.
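The matchmaking and "conference story matcher" ideas in the notes could start from something as simple as overlap in declared interests. A minimal sketch using Jaccard similarity over self-declared interest tags; all names and tags below are invented, and this is not the tool discussed in the session:

```python
def jaccard(a, b):
    """Overlap between two tag sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical community members with self-declared interest tags.
members = {
    "ada":   ["governance", "nlp", "daos"],
    "grace": ["nft-art", "community", "daos"],
    "linus": ["nlp", "governance", "moderation"],
}

def suggest_match(name, members):
    """Suggest the other member with the highest interest overlap."""
    others = {m: jaccard(members[name], tags)
              for m, tags in members.items() if m != name}
    return max(others, key=others.get)

print(suggest_match("ada", members))
```

A heuristic like this is transparent and easy to curate by hand, which fits the session's preference for human oversight before full automation.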
Action items:
Follow-ups:
- What is the Republic of Reality?
- How has GPT been used to automate moderation and culture in gaming communities?
- How can AI be deployed in a DAO context?
Action items:
- Load the Miro board link into the chat for everyone to access.
- Send out presentation slides after the session.
Follow-ups:
- How to scale the community without breaking the bank?
- How to moderate and regulate stablecoins with in-game currency utility tokens?
- How can they talk to regulators once they become regulated?
- How will the culture of this decentralized community evolve, and how can it be sustained if it goes through hypergrowth?
Action items:
- Create a new contribution model.
- Reduce key-person risk through decentralization.
- Put this into a DAO or meta-DAO construct.
- Use GPT for managing data sets.
- Fine-tune a few-shot learning approach using data points, or use zero-shot modeling.
- Build large language models or large modular language models.
- Set up a rapid development framework.
Follow-ups:
- Can you provide more information on the contribution model and how it will be implemented?
- How will the sentiment analysis be used to improve the community culture?
- What steps are being taken to prevent Type II errors in the moderation model?
Action items:
- Tag data for good contributions to retrain the contribution model bimonthly.
- Implement sentiment analysis to monitor community culture.
- Calibrate the moderation model to minimize Type II errors.
Follow-ups:
- Can we explore other gaming communities with a similar moral mission?
- Is this purely a research project to test out how AI could do these things or do you guys plan on monetizing this?
- Is there enough data in smaller DAO communities for the AI model to find intent, or is it most useful for hypergrowth communities? Action items:
- Implement the open-sourced code with the gaming outfit and work with mature DAOs interested in automated governance.
- Consider cost and sourcing when deciding whether to use OpenAI or train an open source model for operationalization.
- Have a conversation with a particular DAO about piloting this sort of thinking in their community.
- Spend time considering opportunities where AI can be used in governance, map them onto the grid based on impact and ease of implementation, and curate until clear on how it works before full automation is implemented.
- Continuously adapt contribution models depending on the direction and maturity of new games added to the community's functional areas. Follow-ups:
- Consider the potential consequences of overreliance on AI-generated content and the need for human curation.
- Explore opportunities to use AI for interactive onboarding and community building, such as creating a playground corner or using matchmaking reputation tools.
- Investigate ways to implement AI tools in decentralized communities without relying on centralized APIs. Action items:
- Implement notification recaps using AI to summarize big calls and notify users when important topics are raised.
- Develop matchmaking reputation tools to suggest connections within the community based on actualized reputation.
- Consider using smaller models instead of large language models for tasks that do not require their full capacity.
- Keep a copy of human-generated data to avoid overfitting on generated/fabricated sets.
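The notification-recap action item, combined with the conclusion that smaller models are often sufficient, suggests a non-LLM baseline is worth having before reaching for a large model. A minimal sketch of an extractive recap, assuming plain-text transcripts; the frequency-scoring heuristic and stop-word list are illustrative, not what the team built:

```python
import re
from collections import Counter

def recap(transcript, max_sentences=2):
    """Naive extractive recap: keep the sentences densest in frequent content words."""
    sentences = [s.strip()
                 for s in re.split(r"(?<=[.!?])\s+", transcript.strip())
                 if s.strip()]
    words = re.findall(r"[a-z]+", transcript.lower())
    stop = {"the", "a", "an", "and", "or", "to", "of", "in", "is",
            "we", "it", "for", "on", "that", "this"}
    freq = Counter(w for w in words if w not in stop)

    def score(sentence):
        toks = re.findall(r"[a-z]+", sentence.lower())
        return sum(freq[t] for t in toks if t not in stop) / (len(toks) or 1)

    ranked = sorted(sentences, key=score, reverse=True)[:max_sentences]
    # Emit the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in ranked)

transcript = ("The DAO discussed treasury spending. "
              "Treasury spending dominated the call. "
              "Someone shared a cat picture. "
              "We vote on treasury spending next week.")
print(recap(transcript))
```

A baseline like this also gives the community something auditable to compare an LLM-generated recap against, which fits the session's caution about centralized APIs.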