MultilingualTextEncoder.swift:12:25: error: cannot find type 'NLContextualEmbedding' in scope #199
Comments
I think you need to be working with the Xcode 15 beta to get the new frameworks that include those NL symbols, and then run on the macOS 14 beta to have the new libraries that get called at runtime. I tried building an app yesterday that used ml-stable-diffusion 1.0.0 with Xcode 14.3, and I got a half-dozen build errors about NL pieces. Then I ran the same code through the Xcode 15 beta 2 and it built successfully and ran fine. If you want to work with Xcode 14 and macOS 13 instead, you should use the older ml-stable-diffusion 0.4.0 release. And you probably didn't find anything searching NL terms because it is too new to be in the wild yet. |
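For example, one way to build against the 0.4.0 release instead of the current code, assuming you got the repo with git (the exact tag name should match what the releases page shows):
git clone https://github.com/apple/ml-stable-diffusion.git
cd ml-stable-diffusion
git checkout 0.4.0
swift build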
Exactly, the minimum requirements for ml-stable-diffusion-1.0.0 indicate Xcode 15. I recommend upgrading to Xcode 15 if possible. Otherwise, ml-stable-diffusion-0.4.0 should work with Xcode 14.3 but lacks some of the key features that accelerate these models. |
Interesting, I didn't even consider that this would need betas of macOS and Xcode to run properly. But I'm wondering: the only file that has a problem is that multilingual one, and I don't need multiple languages, so is there a way to make this work without that file? I'm definitely not installing macOS betas on my work Mac or my personal Mac. I could do it on two older Macs, but they are too old for even Ventura. |
I suspect the various Swift pieces in ml-stable-diffusion-1.0.0 import the NL modules by default, whether they actually get used or not. Maybe you could edit out all references from the ml-stable-diffusion code, but it sure would be a lot cleaner to just use ml-stable-diffusion-0.4.0. The article you are using as a guide almost certainly used version 0.4.0; version 1.0.0 is about a week old. I have been testing 0.4.0 against 1.0.0 in simple pipelines on my laptop and I'm not seeing a whole lot of difference in speed, and the new smaller 6-bit models are running fine with CPU and GPU in builds with 0.4.0. If your target is an iOS device and not a Mac, the situation is probably pretty different because of low-memory optimization. |
Check your Xcode version on the CLI, for example:
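xcodebuild -version      # prints the version of Xcode the command line tools are using
xcode-select -p          # prints the path of the active developer directory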
|
It is parsing your prompt incorrectly because something is entered wrong in your generation command. A typo, or an extra space somewhere, or missing " " marks, or such. Can you copy and paste your command in here? We can try to see what needs to be changed. You’re getting close. The format of the command is very picky. Once you get it right it’ll be easy to change parts of it for different runs. I am curious . . . Are you using the command line and your article by choice, or because you are not aware that there are free programs with simple graphical user interfaces that can run Stable Diffusion on your Mac? |
I think the jump from 0.4.0 to 1.0.0 was because of the change in requirements from macOS 13 / iOS 16 to macOS 14 / iOS 17, and some added optimizations that primarily benefit iOS users. Images made with 0.4.0 and 1.0.0 are exactly the same. There are no changes in image quality. And new features like ControlNet have equal support in both versions. |
Sure, what I'm using is basically the one from the article:
This I have to run from the ml-stable-diffusion folder, because if I just open the Terminal from the dock it doesn't work. As for your other question, yes, I've been using A1111 a lot, with several different models. This is just because I found that article and I'm not a coder but I always wanted to create some kind of small app, and this looked like a good chance to do that. |
Welcome to coding! A similar article I saw 6 months ago was what got me started; it was running Stable Diffusion with Python from the command line. Swift is friendlier for this. I think the problem is with the " " marks. You used the pretty slanted ones “ ”. They might be the default in your particular keyboard setup, but the command line wants the straight kind. You can copy and paste from below and see if that does the trick.
swift run StableDiffusionSample "a photo of a child looking at the stars" --resource-path /Users/myusername/AI_Stable_Diff/coreml-stable-diffusion-v1-4 --seed 93 --output-path /Users/myusername/AI_Stable_Diff/coreml-stable-diffusion-v1-4/output |
You have to run it from the ml-stable-diffusion folder because the Swift package manifest (Package.swift) lives there, and swift run has to be executed from the package root. |
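For example, using the folder names from the error output earlier in the thread:
cd /Users/myusername/AI_Stable_Diff/ml-stable-diffusion
swift run StableDiffusionSample --help      # any swift run command has to start from this folder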
Your article probably explains that you need to use only Core ML model versions with Swift. They are different from the .ckpt and .safetensors models for Automatic1111. You can do your own conversions from the Automatic1111 types to Core ML, but it is slow and involved. This is a repo at Hugging Face that has over 90 models that have already been converted: https://huggingface.co/coreml |
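For reference, this is roughly what a conversion run looks like with the Python script that ships in ml-stable-diffusion (flag names are taken from the repo README and can change between versions, and the model name here is only an example):
python -m python_coreml_stable_diffusion.torch2coreml \
  --convert-unet --convert-text-encoder --convert-vae-decoder --convert-safety-checker \
  --model-version runwayml/stable-diffusion-v1-5 \
  --bundle-resources-for-swift-cli \
  -o ./converted-model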
It is not finding the merges.txt file because of some issue with the path to the model in the command. It took me hours to figure out all the exact correct paths to use when I got started. I'll try to see what is up from the screen caps you've posted later today or tomorrow. On the kiddie porn stuff, I am with you completely. I saw a newspaper article yesterday about someone arrested with 200 foul images on their computer that turned out to be AI generated. It is not clear if that can be prosecuted. Personal computers and the Internet have a history of being driven by the demand for, and access to, porn in one form or another. It is a mixed blessing. |
Thanks man, I appreciate that. I'm going to follow that case you mention; it will be interesting to see if charges can be brought and then held. I think in this case the same saying applies as it does to a gift that falls below expectations, and that is "It's the thought that counts." To me, if you don't take or download real photos of little girls but instead generate them using AI, and it's not in the context of something like "little girls and boys playing in the park", but rather more sinister ones, you're a sick f**k. Simple as that. Should this guy get the same jail time as one who traded, or even worse took, photos of little girls and boys for sexual purposes? Probably not, but he shouldn't go free either. Most of all, he shouldn't just get jail time, but also a loooot of therapy. I mean, I'm fine with porn being the decider of some formats, like VHS as opposed to Betamax, and most importantly, Blu-ray winning over HD-DVD, if the stories are correct, which who knows. But porn is usually people who consent to that (and have a horrible life before that), but this stuff is just sick. If it was just a few it would be easier to digest, but seriously dude, you keep scrolling and most of it is little girls. |
Thanks! |
Just wondering, besides Civitai and Arthub.ai (which luckily deleted all the sick pedo pics), what are other good websites to get a more diverse selection than those? And to be fair, scrolling Arthub today I see a lot more variety than Civitai, but there have to be more websites that show AI generated pictures with the prompts and all that, right? I mean, it's very easy for me to just google "ai generated images" and start opening tabs, but I thought perhaps you guys might know a few good sites to recommend. |
Can't help you there beyond what you already know. At Civitai, they have very fine-grained options for tuning what you see in terms of models and images, but their system depends on tags set by the posters, and they are often not accurate. Civitai also has a Discord server that sorts all the channels between regular and NSFW. Again, not totally accurate, but the regular channels are a whole lot better to browse through. There are channels for models, LoRAs, tech issues, image samples, etc., all with new stuff and links back to the web site if you see something you want to grab. I think they realize they have content problems, but haven't figured out the best way to address it yet. They are trying, though. |
If you do this in Terminal:
What is the output that you get? |
Output README.md original split_einsum |
Ok. So the path in the command is not actually pointing directly to the model files that the scripts are looking for. It is pointing to a folder a level or 2 or 3 higher. My brain is fried right now. I will check in the morning whether you want to add /original or /split_einsum to the path. I'm not sure which is the default. And it might need another folder level or 2 added to the path as well. It would help me if you could tell me how you got the model. Did you do a git clone? Or download a file? Or better yet, can you give me a list of what is immediately inside either the original or split_einsum folder that is listed in the ls output? |
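If it turns out to be the usual Hugging Face apple/coreml-stable-diffusion-v1-4 layout, my guess is that the Swift CLI wants the compiled resources, so the path would look something like this (just a guess until we see what is inside those folders):
--resource-path /Users/myusername/AI_Stable_Diff/coreml-stable-diffusion-v1-4/original/compiled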
I do think we can get you going with the Swift CLI by adjusting the path to the model. I think you have now given all the info I need to suggest a fix, but it'll be tomorrow; can't get into it today. On the Xcode project, I would need to start back at the article and try to get to where you are, in order to understand the context of the errors. It is honestly not something I am even qualified to help you with. I can tinker OK with a CLI, but I am not a Swift coder. Abstractly, ControlNet, which is noted in one of the error messages in your screen cap, was first added to Swift through ml-stable-diffusion version 0.4.0. If your article was written say 2 months ago or earlier, it was using version 0.3.0. So there could be code conflicts between whatever the article is spelling out and even the 0.4.0 package, just like you had to downgrade from the 1.0.0 package to eliminate the first batch of errors you were getting. Xcode pulls dependencies from back at the source, based on the commit number or such in the project build database, when you first load the project. No telling on my end what version of ml-stable-diffusion it has. So again, not knowing how your project got put together on your machine, it is difficult to troubleshoot it. |
I would say thanks but don't spend your time on this. It's not a big deal for me, and I won't be able to produce anything that great with this, like what I can generate with Automatic1111, so I'd rather focus on that. |
Only 0.2.0 works well with Xcode 14.3.1.
In the beginning, I used Xcode 14.3.1 to execute |
ml-stable-diffusion 1.0.0 needs Xcode 15. ml-stable-diffusion 0.4.0, or earlier, should work with Xcode 14.3.1. It does for me. |
The current (12 July) ml-stable-diffusion main branch (source) can now be used with Xcode 14.3.1: #208 The 1.0.0 release package has not been updated to this new commit point. |
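For example, assuming you originally got the code with git clone, one way to pick up the current main instead of the 1.0.0 release:
cd ml-stable-diffusion
git checkout main
git pull
swift build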
I apologize in advance if this is something so obvious for coders, but I don't know much about this. I was looking into Stable Diffusion and I found this article, so I thought, let's give it a try: https://www.createwithswift.com/generating-images-with-stable-diffusion-on-apple-silicon-with-core-ml/
I did everything there step by step, but when I run the terminal command to generate the image, I get several errors all related to the file MultilingualTextEncoder.swift:
% swift run StableDiffusionSample “a photo of a child looking at the stars” --resource-path /Users/myusername/AI_Stable_Diff/coreml-stable-diffusion-v1-4 --seed 93 --output-path /Users/myusername/AI_Stable_Diff/coreml-stable-diffusion-v1-4/output
Fetching https://github.com/apple/swift-argument-parser.git from cache
Fetched https://github.com/apple/swift-argument-parser.git (0.90s)
Computing version for https://github.com/apple/swift-argument-parser.git
Computed https://github.com/apple/swift-argument-parser.git at 1.2.0 (1.44s)
Computed https://github.com/apple/swift-argument-parser.git at 1.2.0 (0.00s)
Creating working copy for https://github.com/apple/swift-argument-parser.git
Working copy of https://github.com/apple/swift-argument-parser.git resolved at 1.2.0
Building for debugging...
/Users/myusername/AI_Stable_Diff/ml-stable-diffusion/swift/StableDiffusion/pipeline/MultilingualTextEncoder.swift:12:25: error: cannot find type 'NLContextualEmbedding' in scope
    let embeddingModel: NLContextualEmbedding
                        ^~~~~~~~~~~~~~~~~~~~~
/Users/myusername/AI_Stable_Diff/ml-stable-diffusion/swift/StableDiffusion/pipeline/MultilingualTextEncoder.swift:183:21: error: cannot find type 'NLScript' in scope
    var asNLScript: NLScript {
                    ^~~~~~~~
/Users/myusername/AI_Stable_Diff/ml-stable-diffusion/swift/StableDiffusion/pipeline/MultilingualTextEncoder.swift:34:31: error: cannot find 'NLContextualEmbedding' in scope
    self.embeddingModel = NLContextualEmbedding(script: script.asNLScript)!
                          ^~~~~~~~~~~~~~~~~~~~~
/Users/myusername/AI_Stable_Diff/ml-stable-diffusion/swift/StableDiffusion/pipeline/MultilingualTextEncoder.swift:60:81: error: 'nil' requires a contextual type
    let embedding = try embeddingModel.embeddingResult(for: text, language: nil)
                                                                                ^
/Users/myusername/AI_Stable_Diff/ml-stable-diffusion/swift/StableDiffusion/pipeline/MultilingualTextEncoder.swift:12:25: error: cannot find type 'NLContextualEmbedding' in scope
    let embeddingModel: NLContextualEmbedding
                        ^~~~~~~~~~~~~~~~~~~~~
/Users/myusername/AI_Stable_Diff/ml-stable-diffusion/swift/StableDiffusion/pipeline/MultilingualTextEncoder.swift:183:21: error: cannot find type 'NLScript' in scope
    var asNLScript: NLScript {
                    ^~~~~~~~
/Users/myusername/AI_Stable_Diff/ml-stable-diffusion/swift/StableDiffusion/pipeline/MultilingualTextEncoder.swift:34:31: error: cannot find 'NLContextualEmbedding' in scope
    self.embeddingModel = NLContextualEmbedding(script: script.asNLScript)!
                          ^~~~~~~~~~~~~~~~~~~~~
/Users/myusername/AI_Stable_Diff/ml-stable-diffusion/swift/StableDiffusion/pipeline/MultilingualTextEncoder.swift:60:81: error: 'nil' requires a contextual type
    let embedding = try embeddingModel.embeddingResult(for: text, language: nil)
                                                                                ^
/Users/myusername/AI_Stable_Diff/ml-stable-diffusion/swift/StableDiffusion/pipeline/MultilingualTextEncoder.swift:12:25: error: cannot find type 'NLContextualEmbedding' in scope
    let embeddingModel: NLContextualEmbedding
                        ^~~~~~~~~~~~~~~~~~~~~
/Users/myusername/AI_Stable_Diff/ml-stable-diffusion/swift/StableDiffusion/pipeline/MultilingualTextEncoder.swift:12:25: error: cannot find type 'NLContextualEmbedding' in scope
    let embeddingModel: NLContextualEmbedding
                        ^~~~~~~~~~~~~~~~~~~~~
error: emit-module command failed with exit code 1 (use -v to see invocation)
/Users/myusername/AI_Stable_Diff/ml-stable-diffusion/swift/StableDiffusion/pipeline/MultilingualTextEncoder.swift:12:25: error: cannot find type 'NLContextualEmbedding' in scope
    let embeddingModel: NLContextualEmbedding
                        ^~~~~~~~~~~~~~~~~~~~~
/Users/myusername/AI_Stable_Diff/ml-stable-diffusion/swift/StableDiffusion/pipeline/MultilingualTextEncoder.swift:183:21: error: cannot find type 'NLScript' in scope
    var asNLScript: NLScript {
                    ^~~~~~~~
[66/67] Compiling StableDiffusion BPETokenizer.swift
I have tried everything for the last 7 hours or so. I'm really surprised that when I search for even small portions of the error, nothing of any real use comes up. It doesn't seem to be something that happens to a lot of people, or there would be posts everywhere.
I deleted the whole ml-stable-diffusion folder, downloaded it again with the git command from that article, and I even downloaded it from here, but neither of them works. The file MultilingualTextEncoder.swift is under <ml-stable-diffusion/swift/StableDiffusion/pipeline>.
I also kept going past that part in the article and started the project in Xcode, but it gives me tons of errors from the start, all related to the same thing, that "cannot find 'NLContextualEmbedding' in scope" error.
Not knowing much about this, I have no idea if it's a bug or if I'm missing something I should have installed, but if so, it wasn't mentioned in the article.
Any clues?