
MultilingualTextEncoder.swift:12:25: error: cannot find type 'NLContextualEmbedding' in scope #199

Open
sebalvarezSKS opened this issue Jun 22, 2023 · 29 comments


@sebalvarezSKS

I apologize in advance if this is something obvious to coders, but I don't know much about this. I was looking into Stable Diffusion and I found this article, so I thought I'd give it a try: https://www.createwithswift.com/generating-images-with-stable-diffusion-on-apple-silicon-with-core-ml/

I did everything there step by step, but when I run the terminal command to generate the image, I get several errors all related to the file MultilingualTextEncoder.swift:

% swift run StableDiffusionSample “a photo of a child looking at the stars” --resource-path /Users/myusername/AI_Stable_Diff/coreml-stable-diffusion-v1-4 --seed 93 --output-path /Users/myusername/AI_Stable_Diff/coreml-stable-diffusion-v1-4/output
Fetching https://github.com/apple/swift-argument-parser.git from cache
Fetched https://github.com/apple/swift-argument-parser.git (0.90s)
Computing version for https://github.com/apple/swift-argument-parser.git
Computed https://github.com/apple/swift-argument-parser.git at 1.2.0 (1.44s)
Computed https://github.com/apple/swift-argument-parser.git at 1.2.0 (0.00s)
Creating working copy for https://github.com/apple/swift-argument-parser.git
Working copy of https://github.com/apple/swift-argument-parser.git resolved at 1.2.0
Building for debugging...
/Users/myusername/AI_Stable_Diff/ml-stable-diffusion/swift/StableDiffusion/pipeline/MultilingualTextEncoder.swift:12:25: error: cannot find type 'NLContextualEmbedding' in scope
    let embeddingModel: NLContextualEmbedding
                        ^~~~~~~~~~~~~~~~~~~~~
/Users/myusername/AI_Stable_Diff/ml-stable-diffusion/swift/StableDiffusion/pipeline/MultilingualTextEncoder.swift:183:21: error: cannot find type 'NLScript' in scope
    var asNLScript: NLScript {
                    ^~~~~~~~
/Users/myusername/AI_Stable_Diff/ml-stable-diffusion/swift/StableDiffusion/pipeline/MultilingualTextEncoder.swift:34:31: error: cannot find 'NLContextualEmbedding' in scope
        self.embeddingModel = NLContextualEmbedding(script: script.asNLScript)!
                              ^~~~~~~~~~~~~~~~~~~~~
/Users/myusername/AI_Stable_Diff/ml-stable-diffusion/swift/StableDiffusion/pipeline/MultilingualTextEncoder.swift:60:81: error: 'nil' requires a contextual type
        let embedding = try embeddingModel.embeddingResult(for: text, language: nil)
                                                                                ^
[the same four diagnostics repeat for the emit-module and compile jobs]
error: emit-module command failed with exit code 1 (use -v to see invocation)
[66/67] Compiling StableDiffusion BPETokenizer.swift
I have tried everything for the last 7 hours or so. I'm really surprised that searching for the error, even just small portions of it, turns up nothing of any real use. It doesn't seem to be something that happens to a lot of people, or there would be posts everywhere.

I deleted the whole ml-stable-diffusion folder, downloaded it again with the git command from that article, and I even downloaded it from here, but neither of them works. The file MultilingualTextEncoder.swift is under <ml-stable-diffusion/swift/StableDiffusion/pipeline>.

I also kept going after that part in the article and started the project in Xcode, but it gives me tons of errors from the start, all related to the same thing, that "cannot find 'NLContextualEmbedding' in scope" error.

Not knowing much about this, I have no idea if it's a bug, or if I'm missing something that I should've installed; but if so, it wasn't mentioned in the article.

Any clues?

@jrittvo

jrittvo commented Jun 22, 2023

I think you need to be working with the Xcode 15 beta to get new frameworks that can talk NL and then run in macOS 14 beta to have the new libraries that get called. I tried building an app yesterday that used ml-stable-diffusion 1.0.0, with Xcode 14.3, and I got a half-dozen build errors about NL pieces. Then I ran the same code through the Xcode 15 beta 2 and it built successfully, and ran fine. If you want to work with Xcode 14 and macOS 13 instead, you should use the older ml-stable-diffusion 0.4.0 release.

And you probably didn't find anything searching NL terms because it is too new to be in the wild yet.
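
If you go the 0.4.0 route, checking out the release tag should be enough (assuming the tag is named after the release):

git clone https://github.com/apple/ml-stable-diffusion.git
cd ml-stable-diffusion
git checkout 0.4.0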

@atiorh
Collaborator

atiorh commented Jun 23, 2023

Exactly, the minimum requirements for ml-stable-diffusion-1.0.0 indicate Xcode 15. I recommend upgrading to Xcode 15 if possible. Otherwise, ml-stable-diffusion-0.4.0 should work with Xcode 14.3, but it lacks some of the key features that accelerate these models.

@sebalvarezSKS
Author

Interesting, I didn't even consider that this would need betas of macOS and Xcode to run properly. But I'm wondering: the only file that has a problem is that multilingual one, and I don't need multiple languages. Is there a way to make this work without that file?

I'm definitely not installing macOS betas on my work Mac, or my personal Mac. I could do it on two older Macs, but they are too old for even Ventura.

@jrittvo

jrittvo commented Jun 23, 2023

I suspect the various Swift pieces in ml-stable-diffusion-1.0.0 import the NL modules by default, whether they actually get used or not. Maybe you could edit out all the references from the ml-stable-diffusion code, but it sure would be a lot cleaner to just use ml-stable-diffusion-0.4.0. The article you are using as a guide almost certainly used version 0.4.0. Version 1.0.0 is about a week old.

I have been testing 0.4.0 against 1.0.0 in simple pipelines on my laptop and I’m not seeing a whole lot of difference in speed, and the new smaller 6-bit models are running fine with CPU and GPU in builds with 0.4.0. If your target is an iOS device and not a Mac, the situation is probably pretty different because of low memory optimization.
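
If you did want to try editing it out, the shape of the change would be something like this - a rough sketch only, not the actual fix the repo later shipped, and the stub name here is made up:

#if compiler(>=5.9)  // the Xcode 14 SDK doesn't have these symbols at all,
                     // so a runtime #available check alone won't fix the build
import NaturalLanguage

// Guarded at runtime too, for builds that target older OS versions.
@available(macOS 14.0, iOS 17.0, *)
struct MultilingualTextEncoderStub {
    let embeddingModel: NLContextualEmbedding
}
#endif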

@ianscrivener

Check your Xcode version on the CLI:

xcodebuild -version
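
It should print something like this (the build number here is just an example), and xcode-select -p will show which Xcode install the command line tools actually point at:

xcodebuild -version
Xcode 14.3.1
Build version 14E300c

xcode-select -p
/Applications/Xcode.app/Contents/Developer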

@sebalvarezSKS
Author

Well, I tried 0.4.0 (I'm confused as to why it jumps from 0.4.0 to 1.0.0), and I'm not getting the same error anymore. However, it still doesn't work as in the article. This is what it shows, and it's the same with the coreml 1-4, 1-5 and 2-1-base:

[Screenshot 2023-06-23 at 12:14:09]

@jrittvo

jrittvo commented Jun 23, 2023

It is parsing your prompt incorrectly because something is entered wrong in your generation command. A typo, or an extra space somewhere, or missing " " marks, or such. Can you copy and paste your command in here? We can try to see what needs to be changed.

You’re getting close. The format of the command is very picky. Once you get it right it’ll be easy to change parts of it for different runs.

I am curious . . . Are you using the command line and your article by choice, or because you are not aware that there are free programs with simple graphical user interfaces that can run Stable Diffusion on your Mac?

@jrittvo

jrittvo commented Jun 23, 2023

I think the jump from 0.4.0 to 1.0.0 was because of the change in requirements from macOS 13 / iOS 16 to macOS 14 / iOS 17, and some added optimizations that primarily benefit iOS users. Images made with 0.4.0 and 1.0.0 are exactly the same. There are no changes in image quality. And new features like ControlNet have equal support in both versions.

@sebalvarezSKS
Author

Sure, what I'm using is basically the one from the article:

swift run StableDiffusionSample “a photo of a child looking at the stars” --resource-path /Users/myusername/AI_Stable_Diff/coreml-stable-diffusion-v1-4 --seed 93 --output-path /Users/myusername/AI_Stable_Diff/coreml-stable-diffusion-v1-4/output

This I have to run from the ml-stable-diffusion folder, because if I just open the Terminal from the dock it doesn't work.

As for your other question, yes, I've been using A1111 a lot, with several different models. This is just because I found that article and I'm not a coder but I always wanted to create some kind of small app, and this looked like a good chance to do that.

@jrittvo

jrittvo commented Jun 23, 2023

Welcome to coding! A similar article I saw 6 months ago was what got me started; it was running Stable Diffusion with Python from the command line. Swift is friendlier for this.

I think the problem is with the " " marks. You used the pretty slanted ones “ “. They might be the default in your particular keyboard setup, but the shell wants the straight kind.

You can copy and paste from below and see if that does the trick.

swift run StableDiffusionSample "a photo of a child looking at the stars" --resource-path /Users/myusername/AI_Stable_Diff/coreml-stable-diffusion-v1-4 --seed 93 --output-path /Users/myusername/AI_Stable_Diff/coreml-stable-diffusion-v1-4/output
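
You can see the difference right in Terminal. With the slanted quotes the shell splits the prompt into separate words; with the straight ones it stays a single argument:

printf '%s\n' “a photo of stars”
“a
photo
of
stars”

printf '%s\n' "a photo of stars"
a photo of stars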

@jrittvo

jrittvo commented Jun 23, 2023

You have to run it from the ml-stable-diffusion folder because the Swift package that defines StableDiffusionSample lives inside that folder. If you want to run from somewhere else, you need to point the command at that package. It is easier to run from that folder. I even moved everything inside my ml-stable-diffusion folder - the various models, the output folder, and my ControlNet models - so I don't need all those paths in the command.
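
If you ever do want to run it from somewhere else, swift run takes a --package-path flag pointing at the package folder, so the same command becomes:

swift run --package-path /Users/myusername/AI_Stable_Diff/ml-stable-diffusion StableDiffusionSample "a photo of a child looking at the stars" --resource-path /Users/myusername/AI_Stable_Diff/coreml-stable-diffusion-v1-4 --seed 93 --output-path /Users/myusername/AI_Stable_Diff/coreml-stable-diffusion-v1-4/output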

@jrittvo

jrittvo commented Jun 23, 2023

Your article probably explains that you need to use only Core ML model versions with Swift. They are different from the .ckpt and .safetensors models for Automatic1111. You can do your own conversions from the Automatic1111 types to the Core ML type, but it is slow and involved. This is a repo at Hugging Face that has over 90 models that have already been converted: https://huggingface.co/coreml
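
For the record, the conversion goes through the Python tooling in this repo. It looks roughly like this, per the README (the model version and flag selection here are just an example):

python -m python_coreml_stable_diffusion.torch2coreml \
  --convert-unet --convert-text-encoder --convert-vae-decoder \
  --model-version CompVis/stable-diffusion-v1-4 \
  --bundle-resources-for-swift-cli \
  -o ./converted-model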

@sebalvarezSKS
Author

Yes, you were right. I have the habit of just copying and pasting things into the Notes app because it saves everything automatically as you type or paste, and that way I don't have to deal with files. But it's still rich text, so I guess it changes the quotation marks to that slanted format.

Regardless, I think this thing doesn't like me very much, because now it's something else:

[Screenshot 2023-06-23 at 13:37:00]

Come on!! The merges.txt file is there, and it has content inside. I also tried it with the 1-5 coreml and same result.

As for the Xcode part of the article, it's not all peachy either:

[Screenshot 2023-06-23 at 13:17:51]

I think I'll focus on learning A1111 as best as I can, since the goal is not to code a new app but to generate the best possible images of different things I can use at work. That's difficult enough, because you scroll down civitai.com and 85% of the generated imagery is little girls, or teenage girls, most of them in suggestive poses, and with bodies that are not typical of their age, which reminds me how sick this world is. I mean, this is an amazing technology that could be used to generate all sorts of great pictures, and a lot of people use it to create their sick fantasies. It's fine if they want to create beautiful women, but 9 year old girls with large breasts? That's just too sick.

And at least those girls are created with clothes. Last week I did a search that took me to arthub.ai, and upon loading, all I saw was naked pictures of girls ranging from 9 to 15 years old, with gigantic breasts. And I'm not even talking about a drawing or illustration style, which would be sick enough; these were renders that were almost indistinguishable from real photos.

Who the hell are these people? It shows you how sick this world is.

@jrittvo

jrittvo commented Jun 23, 2023

It is not finding the merges.txt file because of some issue with the path to the model in the command. It took me hours to figure out all the exact correct paths to use when I got started. I'll try to see what is up from the screen caps you've posted later today or tomorrow.

On the kiddie porn stuff, I am with you completely. I saw a newspaper article yesterday about someone arrested with 200 foul images on their computer that turned out to be AI generated. It is not clear if that can be prosecuted. Personal computers and the Internet have a history of being driven by the demand for, and access to, porn in one form or another. It is a mixed blessing.

@sebalvarezSKS
Author

Thanks man, I appreciate that.

I'm going to follow that case you mention, it will be interesting to see if charges can be brought and then held. I think in this case the same saying applies as it does to a gift that falls below expectations, and that is "It's the thought that counts."

To me, if you don't take or download real photos of little girls but instead generate them using AI, and it's not in the context of something like "little girls and boys playing in the park" but rather something more sinister, you're a sick f**k. Simple as that. Should this guy get the same jail time as someone who traded, or even worse took, photos of little girls and boys for sexual purposes? Probably not, but he shouldn't go free either. Most of all, he shouldn't just get jail time, but also a loooot of therapy.

I mean, I'm fine with porn being the decider of some formats, like VHS as opposed to Betamax, and most importantly, Blu-ray winning over HD-DVD, if the stories are correct, which who knows. But porn is usually people who consent to that (and have a horrible life before that), but this stuff is just sick. If it was just a few it would be easier to digest, but seriously dude, you keep scrolling and most of it is little girls.

@sebalvarezSKS
Author

Thanks!

@sebalvarezSKS
Author

Just wondering, besides Civitai and Arthub.ai (which luckily deleted all the sick pedo pics), what are other good websites to get a more diverse selection than those? And to be fair, scrolling Arthub today I see a lot more variety than Civitai, but there have to be more websites that show AI generated pictures with the prompts and all that, right?

I mean, it's very easy for me to just google "ai generated images" and start opening tabs, but I thought perhaps you guys might know a few good sites to recommend.

@jrittvo

jrittvo commented Jun 23, 2023

Can't help you there beyond what you already know. At Civitai, they have very fine-grained options for tuning what you see in terms of models and images, but their system depends on tags set by the posters, and they are often not accurate. Civitai also has a Discord server that sorts all the channels between regular and NSFW. Again, not totally accurate, but the regular channels are a whole lot better to browse through. There are channels for models, LoRAs, tech issues, image samples, etc., all with new stuff and links back to the web site if you see something you want to grab. I think they realize they have content problems, but haven't figured out the best way to address them yet. They are trying, though.

@jrittvo

jrittvo commented Jun 24, 2023

If you do this in Terminal:

ls /Users/myusername/AI_Stable_Diff/coreml-stable-diffusion-v1-4

What is the output that you get?

@sebalvarezSKS
Author

Output README.md original split_einsum

@jrittvo

jrittvo commented Jun 25, 2023

Ok. So the path in the command is not actually pointing directly to the model files that the scripts are looking for. It is pointing to a folder a level or 2 or 3 higher. My brain is fried right now. I will check in the morning whether you want to add /original or /split_einsum to the path. I'm not sure which is the default. And it might need another folder level or 2 added to the path as well.

It would help me if you could tell me how you got the model. Did you do a git clone? Or download a file? Or better yet, can you give me a list of what is immediately inside either the original or split_einsum folder listed in the ls output?
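
My guess, before checking: the CLI wants the folder that actually contains the compiled .mlmodelc files along with merges.txt and vocab.json, which in these Hugging Face repos is usually original/compiled or split_einsum/compiled. So the fix is probably something like:

swift run StableDiffusionSample "a photo of a child looking at the stars" --resource-path /Users/myusername/AI_Stable_Diff/coreml-stable-diffusion-v1-4/original/compiled --seed 93 --output-path /Users/myusername/AI_Stable_Diff/coreml-stable-diffusion-v1-4/output

But let's confirm against your folder listing first.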

@sebalvarezSKS
Author

Hi, sorry I didn't have time to reply until now. To answer your question, the way I got the model was by copying and pasting exactly what the guy shows in the article, so basically this sequence of commands:

brew install git-lfs
git lfs install
git clone https://huggingface.co/apple/coreml-stable-diffusion-v1-4
git clone https://github.com/apple/ml-stable-diffusion.git

And then ran the same command he did, except of course changing the path, so:

swift run StableDiffusionSample “a photo of a child looking at the stars” --resource-path /Users/myusername/AI_Stable_Diff/coreml-stable-diffusion-v1-4 --seed 93 --output-path /Users/myusername/AI_Stable_Diff/coreml-stable-diffusion-v1-4/output

And this is the list of everything in the folders you mentioned:

[Screenshot 2023-06-26 at 16:45:30]

Regardless, the CLI part is the less important thing; I would like to find out what the errors in Xcode mean when I copy and paste the code from the article. I'm sure it's one of those things that programmers would figure out in a second, but I have no clue what they mean:

[Screenshot 2023-06-23 at 13:17:51]

If this is a piece of cake to get rid of, please let me know, but if it's a complicated issue that requires hours of trial and error, then don't worry about it. This was mostly something to say "Hey, I built an app! (kind of, more like copied the code)", but it's not part of my job and not something I lose any sleep over.

@jrittvo

jrittvo commented Jun 26, 2023

I do think we can get you going with the Swift CLI by adjusting the path to the model. I think you have now given me all the info I need to suggest a fix. But it'll be tomorrow. Can't get into it today.

On the Xcode project, I would need to start back at the article and try to get to where you are, in order to understand the context of the errors. Honestly, it is not something I am even qualified to help you with. I can tinker OK with a CLI, but I am not a Swift coder.

Abstractly: ControlNet, which is noted in one of the error messages in your screen cap, was first added to Swift in ml-stable-diffusion version 0.4.0. If your article was written, say, 2 months ago or earlier, it was using version 0.3.0. So there could be conflicts between whatever code the article spells out and even the 0.4.0 package, just like you had to downgrade from the 1.0.0 package to eliminate the first batch of errors you were getting. When you first load a project, Xcode pulls dependencies from the source repo, pinned to whatever commit or version is recorded in the project's package resolution data. No telling on my end what version of ml-stable-diffusion it resolved to. So again, not knowing how your project got put together on your machine, it is difficult to troubleshoot.
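
If you want to see what your project is pinned to, the dependency lives either in the Package Dependencies tab in Xcode or, for a plain SwiftPM project, in Package.swift. Pinning to the older release would look something like this (a hypothetical fragment, not from your project):

dependencies: [
    // Force the release that still builds with Xcode 14.3
    .package(url: "https://github.com/apple/ml-stable-diffusion.git", exact: "0.4.0")
]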

@sebalvarezSKS
Author

I would say thanks but don't spend your time on this. It's not a big deal for me, and I won't be able to produce anything that great with this, like what I can generate with Automatic1111, so I'd rather focus on that.

@dehengxu

dehengxu commented Jul 3, 2023

Only 0.2.0 works well with Xcode 14.3.1.

@FlameTinary

At first I used Xcode 14.3.1 to execute

swift run StableDiffusionSample "a photo of an astronaut riding a horse on mars" --resource-path ./outputml/Resources/ --seed 93 --output-path ./outputImages

and found the same problem as you. Then I re-ran it using Xcode 15, and it works.

@jrittvo

jrittvo commented Jul 4, 2023

ml-stable-diffusion 1.0.0 needs Xcode 15. ml-stable-diffusion 0.4.0, or earlier, should work with Xcode 14.3.1. It does for me.

@jrittvo

jrittvo commented Jul 12, 2023

The current (12 July) ml-stable-diffusion main branch, built from source, can now be used with Xcode 14.3.1: #208

The 1.0.0 release package has not been updated for this new commit point.
