
Face landmarks detection not working in Expo #8078

Open
SamuraiF0x opened this issue Nov 21, 2023 · 12 comments
Assignees: gaikwadrahul8
Labels: comp:react-native · type:bug (Something isn't working) · type:support (user support questions)

Comments

@SamuraiF0x

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow.js): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 11
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: Samsung Galaxy A72
  • TensorFlow.js installed from (npm or script link): yarn add @tensorflow/tfjs
  • TensorFlow.js version (use command below): "@tensorflow/tfjs": "^4.13.0"
  • Browser version: -
  • Tensorflow.js Converter Version: -

Describe the current behavior

  1. First I run expo prebuild
  2. Then expo run:android
  3. Upon running "expo start --dev-client --clear" an error occurs immediately.
  • When the line const model = FaceModel.SupportedModels.MediaPipeFaceMesh; is deleted, the error disappears, but then createDetector() can't be used.

Describe the expected behavior
estimateFaces() should return results

Standalone code to reproduce the issue
Dependencies:

  "dependencies": {
    "@mediapipe/face_detection": "^0.4.1646425229",
    "@mediapipe/face_mesh": "^0.4.1633559619",
    "@react-native-async-storage/async-storage": "^1.19.6",
    "@tensorflow-models/face-detection": "^1.0.2",
    "@tensorflow-models/face-landmarks-detection": "^1.0.5",
    "@tensorflow/tfjs": "^4.13.0",
    "@tensorflow/tfjs-react-native": "^0.8.0",
    "expo": "~49.0.18",
    "expo-build-properties": "^0.8.3",
    "expo-camera": "~13.6.0",
    "expo-dev-client": "~2.4.12",
    "expo-font": "~11.6.0",
    "expo-gl": "^13.2.0",
    "expo-splash-screen": "~0.22.0",
    "expo-status-bar": "~1.7.1",
    "react": "18.2.0",
    "react-native": "0.72.7",
    "react-native-canvas": "^0.1.39",
    "react-native-fs": "^2.20.0",
    "react-native-gesture-handler": "~2.13.4",
    "react-native-reanimated": "^3.5.4",
    "react-native-safe-area-context": "4.7.4",
    "react-native-webview": "^13.6.2",
  },

App.tsx

import React, { useEffect, useState } from "react";
import { StyleSheet, Text, View } from "react-native";
import * as FaceModel from "@tensorflow-models/face-landmarks-detection";
import * as tf from "@tensorflow/tfjs";
import { cameraWithTensors } from "@tensorflow/tfjs-react-native";
import { Camera, CameraType } from "expo-camera";
import { StatusBar } from "expo-status-bar";

export default function App() {
  const [modelReady, setModelReady] = useState(null);
  const TensorCamera = cameraWithTensors(Camera);

  useEffect(() => {
  	(async () => {
  		await tf.ready();

  		const model = FaceModel.SupportedModels.MediaPipeFaceMesh;
  		const detectorConfig = {
  			runtime: "mediapipe",
  			maxFaces: 1,
  			refineLandmarks: true,
  			solutionPath: "https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh",
  		};

  		setModelReady(await FaceModel.createDetector(model, detectorConfig));

  		console.log(tf.getBackend());
  		console.log("READY!!!");
  	})();
  }, []);

  const handleImage = (images) => {
  	const loop = async () => {
  		const nextImageTensor = images.next().value;
  		const result = await modelReady.estimateFaces({ input: nextImageTensor });
  		console.log(result);
  		tf.dispose([nextImageTensor]);

  		requestAnimationFrame(loop);
  	};
  	loop();
  };

  return (
  	<View style={styles.container}>
  		<StatusBar style="auto" />
  		{modelReady && (
  			<TensorCamera
  				style={{ height: 200, width: 200 }}
  				type={CameraType.front}
  				onReady={handleImage}
  				cameraTextureHeight={1080}
  				cameraTextureWidth={1920}
  				resizeHeight={200}
  				resizeWidth={152}
  				resizeDepth={3}
  				autorender={true}
  				useCustomShadersToResize={false}
  			/>
  		)}
  	</View>
  );
}

const styles = StyleSheet.create({
  container: {
  	flex: 1,
  	backgroundColor: "#fff",
  	alignItems: "center",
  	justifyContent: "center",
  },
});
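For reference, here is a sketch of an alternative detector config using the package's "tfjs" runtime instead of "mediapipe". The "mediapipe" runtime loads MediaPipe solution files into a browser DOM, which React Native does not have, so this variant may be worth trying; the option names follow @tensorflow-models/face-landmarks-detection, but whether this avoids the crash below is untested here.

```javascript
// Hypothetical config variant: the "tfjs" runtime runs entirely on TFJS
// tensors and does not fetch @mediapipe solution files from a CDN, so it
// does not need a solutionPath (or a DOM).
const detectorConfig = {
  runtime: "tfjs",       // pure-TFJS runtime, no @mediapipe solution files
  maxFaces: 1,
  refineLandmarks: true, // include iris landmarks, as in the original config
};

// Usage would mirror App.tsx above (imports assumed from that snippet):
// const model = FaceModel.SupportedModels.MediaPipeFaceMesh;
// const detector = await FaceModel.createDetector(model, detectorConfig);
```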

Logs

 ERROR  TypeError: Cannot read property 'includes' of undefined, js engine: hermes
 ERROR  Invariant Violation: "main" has not been registered. This can happen if:
* Metro (the local dev server) is run from the wrong folder. Check if Metro is running, stop it and restart it in the current project.
* A module failed to load due to an error and `AppRegistry.registerComponent` wasn't called., js engine: hermes
  • When const model = FaceModel.SupportedModels.MediaPipeFaceMesh; is deleted:
    console.log(tf.getBackend()); results in => LOG rn-webgl
@SamuraiF0x SamuraiF0x added the type:bug Something isn't working label Nov 21, 2023
@google-ml-butler google-ml-butler bot added the type:support user support questions label Nov 21, 2023
@SamuraiF0x SamuraiF0x changed the title Face landmarks detection not working on Expo Face landmarks detection not working in Expo Nov 21, 2023
@gaikwadrahul8 gaikwadrahul8 self-assigned this Nov 22, 2023
@gaikwadrahul8
Contributor

Hi, @SamuraiF0x

I apologize for the delayed response. If possible, could you please share your GitHub repo along with detailed steps you followed, so that we can replicate the same behavior on our end and investigate this issue further?

Thank you for your understanding and patience.

@SamuraiF0x
Author

SamuraiF0x commented Nov 24, 2023

Hi @gaikwadrahul8,

Thanks for the reply! Here's my repo:
https://github.com/SamuraiF0x/expo-faceMesh

Steps I took:

  1. installed all dependencies with yarn
  2. yarn cache clean
  3. expo prebuild
  4. expo run:android
  5. eas build --profile development --platform android
  6. download and install on phone
  7. expo start --dev-client --clear
  8. open app

@gaikwadrahul8
Contributor

Hi, @SamuraiF0x

Thank you for sharing your repo and the steps to replicate the behavior. I cloned your repo, but when I ran expo prebuild I got the error message below. To resolve it, I tried npm install -g sharp-cli && npx expo-optimize || true, which output All assets were fully optimized already. To confirm: am I doing something wrong on my end? If so, could you please guide me on how to replicate the same behavior? Thank you.

Here is error log after expo prebuild command for reference :

(base) gaikwadrahul-macbookpro:expo-faceMesh gaikwadrahul$ expo prebuild
✔ Created native projects | /android, /ios already created | gitignore skipped
› Using current versions instead of recommended expo-splash-screen@~0.20.5, react-native@0.72.6.
✔ Updated package.json and added index.js entry point for iOS and Android
› Installing using yarn
> yarn install
» android: userInterfaceStyle: Install expo-system-ui in your project to enable this feature.
✖ Config sync failed
Error: [ios.dangerous]: withIosDangerousBaseMod: Supplied image is not a supported image type: ./assets/logos/tamagui.svg
Error: [ios.dangerous]: withIosDangerousBaseMod: Supplied image is not a supported image type: ./assets/logos/tamagui.svg
    at ensureImageOptionsAsync (/usr/local/lib/node_modules/expo/node_modules/@expo/image-utils/build/Image.js:138:15)
    at async generateImageAsync (/usr/local/lib/node_modules/expo/node_modules/@expo/image-utils/build/Image.js:146:18)
    at async generateUniversalIconAsync (/usr/local/lib/node_modules/expo/node_modules/@expo/prebuild-config/build/plugins/icons/withIosIcons.js:115:7)
    at async setIconsAsync (/usr/local/lib/node_modules/expo/node_modules/@expo/prebuild-config/build/plugins/icons/withIosIcons.js:78:22)
    at async /usr/local/lib/node_modules/expo/node_modules/@expo/prebuild-config/build/plugins/icons/withIosIcons.js:54:5
    at async action (/usr/local/lib/node_modules/expo/node_modules/@expo/config-plugins/build/plugins/withMod.js:201:23)
    at async interceptingMod (/usr/local/lib/node_modules/expo/node_modules/@expo/config-plugins/build/plugins/withMod.js:105:21)
    at async action (/usr/local/lib/node_modules/expo/node_modules/@expo/config-plugins/build/plugins/createBaseMod.js:61:21)
    at async interceptingMod (/usr/local/lib/node_modules/expo/node_modules/@expo/config-plugins/build/plugins/withMod.js:105:21)
    at async evalModsAsync (/usr/local/lib/node_modules/expo/node_modules/@expo/config-plugins/build/plugins/mod-compiler.js:202:25)
(base) gaikwadrahul-macbookpro:expo-faceMesh gaikwadrahul$ 

@SamuraiF0x
Author

SamuraiF0x commented Nov 24, 2023

Hi @gaikwadrahul8 ,

I changed "icon": "./assets/logos/tamagui.svg" => "icon": "./assets/icon.png" in app.json
(you can pull from main)

I think this should fix the error

@SamuraiF0x
Author

Hi @gaikwadrahul8, did you manage to resolve the error and prebuild it?

@gaikwadrahul8
Contributor

Hi, @SamuraiF0x

Yes, the expo prebuild command is now working fine, but I'm getting the error below with expo run:android. It seems the error is caused by a compatibility issue with the Java Development Kit (JDK) version used in the project, if I'm not wrong. May I know which JDK version you are using for this project? Thank you for your understanding and patience.

I am using below version of Java :

(base) gaikwadrahul-macbookpro:~ gaikwadrahul$ java --version
java 21.0.1 2023-10-17 LTS
Java(TM) SE Runtime Environment (build 21.0.1+12-LTS-29)
Java HotSpot(TM) 64-Bit Server VM (build 21.0.1+12-LTS-29, mixed mode, sharing)
(base) gaikwadrahul-macbookpro:~ gaikwadrahul$ javac --version
javac 21.0.1
(base) gaikwadrahul-macbookpro:~ gaikwadrahul$ 

Here is error log after running command expo run:android :

(base) gaikwadrahul-macbookpro:expo-faceMesh gaikwadrahul$ expo run:android
› Building app...
Configuration on demand is an incubating feature.

FAILURE: Build failed with an exception.

* What went wrong:
Could not open settings generic class cache for settings file '/Users/gaikwadrahul/Desktop/TFJS/test-8078/expo-faceMesh/android/settings.gradle' (/Users/gaikwadrahul/.gradle/caches/8.0.1/scripts/ea7942qvucrtweejmnostmcjh).
> BUG! exception in phase 'semantic analysis' in source unit '_BuildScript_' Unsupported class file major version 65

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 376ms
Error: /Users/gaikwadrahul/Desktop/TFJS/test-8078/expo-faceMesh/android/gradlew exited with non-zero code: 1
Error: /Users/gaikwadrahul/Desktop/TFJS/test-8078/expo-faceMesh/android/gradlew exited with non-zero code: 1
    at ChildProcess.completionListener (/usr/local/lib/node_modules/expo/node_modules/@expo/spawn-async/build/spawnAsync.js:52:23)
    at Object.onceWrapper (node:events:629:26)
    at ChildProcess.emit (node:events:514:28)
    at maybeClose (node:internal/child_process:1091:16)
    at ChildProcess._handle.onexit (node:internal/child_process:302:5)
    ...
    at Object.spawnAsync [as default] (/usr/local/lib/node_modules/expo/node_modules/@expo/spawn-async/build/spawnAsync.js:17:21)
    at spawnGradleAsync (/usr/local/lib/node_modules/expo/node_modules/@expo/cli/build/src/start/platforms/android/gradle.js:72:46)
    at Object.assembleAsync (/usr/local/lib/node_modules/expo/node_modules/@expo/cli/build/src/start/platforms/android/gradle.js:52:18)
    at runAndroidAsync (/usr/local/lib/node_modules/expo/node_modules/@expo/cli/build/src/run/android/runAndroidAsync.js:35:24)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
(base) gaikwadrahul-macbookpro:expo-faceMesh gaikwadrahul$ 

@SamuraiF0x
Author

Hi @gaikwadrahul8,

No worries, here's my java version:

java --version
openjdk 17.0.9 2023-10-17
OpenJDK Runtime Environment Temurin-17.0.9+9 (build 17.0.9+9)
OpenJDK 64-Bit Server VM Temurin-17.0.9+9 (build 17.0.9+9, mixed mode, sharing)
javac --version
javac 17.0.9

@KromoS1

KromoS1 commented Dec 13, 2023

Hi, I'm getting the exact same error

TypeError: Cannot read property 'includes' of undefined, js engine: hermes
 ERROR  Invariant Violation: "main" has not been registered. This can happen if:
* Metro (the local dev server) is run from the wrong folder. Check if Metro is running, stop it and restart it in the current project.
* A module failed to load due to an error and `AppRegistry.registerComponent` wasn't called., js engine: hermes

I noticed that I get this error as soon as the library is imported, regardless of whether I use any objects from it. If I comment out the import, the error goes away.

these are the dependencies I'm using

"@mediapipe/face_mesh": "^0.4.1633559619",
"@tensorflow-models/face-detection": "^1.0.2",
"@tensorflow-models/face-landmarks-detection": "^1.0.5",
"@tensorflow/tfjs": "^4.15.0",
"@tensorflow/tfjs-react-native": "^1.0.0",
"expo": "~49.0.15",
"react": "18.2.0",
"react-native": "0.72.6",

I'm using a dev-client build from Expo on a physical device (Redmi Note 11). Here are the steps I used to install the build:

eas build --profile base --platform android
bun expo run:android

Here's an example of a component

import * as face_mesh from '@tensorflow-models/face-landmarks-detection'
import '@tensorflow/tfjs-react-native'

import { Camera } from 'expo-camera'
import React from 'react'
import { StyleSheet, useWindowDimensions, View } from 'react-native'

import { CustomTensorCamera } from './CustomTensorCamera'
import { LoadingView } from './LoadingView'
import { PredictionList } from './PredictionList'

export function ModelView() {
  const load = async () => {
    await tf.ready()
    await face_mesh.load(face_mesh.SupportedPackages.mediapipeFacemesh)
  }

  load()

  const [predictions, setPredictions] = React.useState([])

  if (!false) {
    return <LoadingView message="Loading TensorFlow model" />
  }

  return (
    <View style={{ flex: 1, backgroundColor: 'black', justifyContent: 'center' }}>
      <PredictionList predictions={predictions} />
      <View style={{ borderRadius: 20, overflow: 'hidden' }}>
        <ModelCamera model={model} setPredictions={setPredictions} />
      </View>
    </View>
  )
}

function ModelCamera({ model, setPredictions }) {
  const raf = React.useRef(null)
  const size = useWindowDimensions()

  React.useEffect(() => {
    return () => {
      cancelAnimationFrame(raf.current)
    }
  }, [])

  const onReady = React.useCallback(
    images => {
      const loop = async () => {
        const nextImageTensor = images.next().value
        const predictions = await model.estimateFaces(nextImageTensor)
        console.log(JSON.stringify(predictions, null, 2))
        setPredictions(predictions)
        raf.current = requestAnimationFrame(loop)
      }
      loop()
    },
    [setPredictions]
  )

  return React.useMemo(
    () => (
      <CustomTensorCamera
        width={size.width}
        style={styles.camera}
        type={Camera.Constants.Type.front}
        onReady={onReady}
        autorender
      />
    ),
    [onReady, size.width]
  )
}

package.json full file ->
package.zip

@KromoS1

KromoS1 commented Dec 13, 2023

I've also identified another case. I created an empty project with the dependencies below and found that the @mediapipe/face_mesh library itself throws the error. After switching the engine from Hermes to JSC, I got a clearer error: TypeError: undefined is not an object (evaluating 'navigator.userAgent.includes')
The conclusion is that this library assumes access to the browser's navigator object, which does not exist in React Native.
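Based on that diagnosis, one possible workaround sketch is to define a minimal navigator polyfill before @mediapipe/face_mesh is evaluated (e.g. at the very top of index.js). The userAgent string below is an arbitrary placeholder; the library only calls navigator.userAgent.includes(...), so any string avoids the TypeError. Whether the rest of the pipeline then works in Hermes is untested here.

```javascript
// Hypothetical polyfill: give the JS environment a navigator.userAgent string
// so that @mediapipe/face_mesh's `navigator.userAgent.includes(...)` call
// does not crash. Must run before that package is imported.
if (typeof global.navigator === "undefined") {
  global.navigator = {};
}
if (typeof global.navigator.userAgent !== "string") {
  global.navigator.userAgent = "ReactNative"; // placeholder value
}

// Only after the polyfill is in place:
// import "@mediapipe/face_mesh";
```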

"dependencies": {
    "@mediapipe/face_mesh": "^0.4.1633559619",
    "@react-native-async-storage/async-storage": "^1.21.0",
    "@tensorflow-models/face-detection": "^1.0.2",
    "@tensorflow-models/face-landmarks-detection": "^1.0.5",
    "@tensorflow/tfjs": "^4.15.0",
    "@tensorflow/tfjs-core": "^4.15.0",
    "@tensorflow/tfjs-react-native": "^1.0.0",
    "expo": "~49.0.15",
    "expo-camera": "^13.6.0",
    "expo-gl": "^13.2.0",
    "expo-status-bar": "~1.6.0",
    "react": "18.2.0",
    "react-native": "0.72.6",
    "react-native-fs": "^2.20.0"
  },
  "devDependencies": {
    "@babel/core": "^7.20.0",
    "@types/react": "~18.2.14",
    "typescript": "^5.1.3"
  },

@SamuraiF0x
Author

@KromoS1 @gaikwadrahul8 have you perhaps found a solution?

@KromoS1

KromoS1 commented Jan 9, 2024

@gaikwadrahul8 Yes, but not in React Native. I created a separate back-end application where I implemented a socket that receives camera data in base64 format. I converted it to a Buffer, then into a tensor object, and was then able to use the model to detect the landmarks I needed.

"@tensorflow-models/face-detection": "^1.0.2",
"@tensorflow-models/face-landmarks-detection": "^1.0.5",
"@tensorflow/tfjs-backend-webgl": "^4.15.0",
"@tensorflow/tfjs-core": "^4.15.0",
"@tensorflow/tfjs-node": "^4.15.0",
"@tensorflow/tfjs-node-gpu": "^4.15.0",
"@mediapipe/face_mesh": "^0.4.1633559619",

To make it work, I used examples from this repository (specifically this example) and updated the code for backend use.
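The server-side pipeline described above (base64 frame over a socket, then Buffer, then tensor, then estimateFaces) could be sketched roughly as follows. The helper names are invented for illustration, and the TFJS calls assume @tensorflow/tfjs-node from the dependency list above.

```javascript
// Rough sketch, not KromoS1's actual code: decode a base64 camera frame and
// run it through a face-landmarks detector on the Node.js back end.

// Pure helper: strip an optional data-URI prefix and decode base64 to bytes.
function base64FrameToBuffer(frame) {
  const b64 = String(frame).replace(/^data:image\/\w+;base64,/, "");
  return Buffer.from(b64, "base64");
}

// Assumes @tensorflow/tfjs-node is installed; required lazily so the pure
// helper above stays usable on its own.
async function estimateFromBase64(detector, frame) {
  const tf = require("@tensorflow/tfjs-node");
  // Decode JPEG/PNG bytes into an HWC uint8 tensor with 3 channels.
  const imageTensor = tf.node.decodeImage(base64FrameToBuffer(frame), 3);
  try {
    // `detector` would come from faceLandmarksDetection.createDetector(...)
    return await detector.estimateFaces(imageTensor);
  } finally {
    imageTensor.dispose(); // avoid leaking tensor memory on every frame
  }
}
```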

@SamuraiF0x
Author

Hi @gaikwadrahul8, did you have time in the meantime to check this out?
