Free Software Directory talk:Artificial Intelligence Team
Revision as of 16:14, 29 April 2023
Free software replacements that are missing
- IDE-AI: CodeGPT-like, DeepMind AlphaCode
- Voice to instrument: Tone Transfer-like
- Identification
- Photo
- Pl@ntNet for Android (https://play.google.com/store/apps/details?id=org.plantnet) - Pl@ntNet is a citizen science project for automatic plant identification from photographs, based on machine learning. "The observations shared by the community are published with the associated images under a creative common cc-by-sa license (visible author name)." - https://plantnet.org/en/2020/08/06/your-plntnet-data-integrated-into-gbif/
- Audio
- Shazam: Shazam is an application that can identify music, movies, advertising, and television shows, based on a short sample played and using the microphone on the device.
- A free app that functions like midomi.com -- "You can find songs with midomi and your own voice. Forgot the name of a song? Heard a bit of one on the radio? All you need is your computer's microphone."
- Photo
- http://design.rxnfinder.org/addictedchem/prediction/
Freedom issues
Stable Diffusion
Stable Diffusion model files (.ckpt): The training data contains non-free licensed material.
Here is the Stable Diffusion starting point: https://huggingface.co/CompVis/stable-diffusion-v1-4, and its license: https://huggingface.co/spaces/CompVis/stable-diffusion-license
stable-diffusion-webui
- https://github.com/AUTOMATIC1111/stable-diffusion-webui
- (IMPORTANT) Add a license to this repository #2059 - https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/2059
- Demo, and guide - https://www.youtube.com/watch?v=R52hxnpNews
- Extension script: https://github.com/deforum-art/deforum-for-automatic1111-webui
Large Language Models
As far as I'm aware, large language models rely on training data that contains non-free licensed material.
LLaMa
- LLaMa is released under the GNU General Public License v3.0: https://github.com/facebookresearch/llama/blob/main/LICENSE
- LLaMa is comparable to GPT-3, and has been fully released as a torrent.
- The 7B parameter model has a VRAM requirement of 10 GB. The 13B model requires 20 GB, the 30B model 40 GB, and the 65B model 80 GB.
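The stated VRAM requirements can be checked against a given GPU with a small sketch. This is a hypothetical helper, not part of LLaMA or any of its tooling; the figures are just the numbers quoted above.

```python
# Hypothetical helper: which of the LLaMA model sizes listed above
# fit in a given amount of GPU VRAM? The figures are the quoted
# requirements, in GB per model size.
VRAM_GB = {"7B": 10, "13B": 20, "30B": 40, "65B": 80}

def models_that_fit(vram_gb: float) -> list[str]:
    """Return the model sizes whose stated VRAM requirement fits."""
    return [size for size, need in VRAM_GB.items() if need <= vram_gb]

print(models_that_fit(24))  # a 24 GB card fits the 7B and 13B models
```

For example, a consumer 24 GB card covers 7B and 13B, while 30B and 65B need data-center-class hardware or multi-GPU setups.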
External links
- Applications of artificial intelligence (Wikipedia): https://en.wikipedia.org/wiki/Applications_of_artificial_intelligence
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the page “GNU Free Documentation License”.
The copyright and license notices on this page only apply to the text on this page. Any software, copyright licenses, or other similar notices described in this text have their own copyright notices and licenses, which can usually be found in the distribution or license text itself.