This toy may entertain all fans of Futurama. You will need Will Cogley's eye mechanism: https://www.notion.so/nilheim-mechatronics/EyeMech-1-0-fee129fa32a443749f88524f53702f5a . The main computer is a Raspberry Pi Zero 2 W. The camera is the Raspberry Pi AI Camera; it has an on-board AI processor that is used to recognize people. The position of the detected people is then used to move the eyes toward the closest person. The servos are driven by an Adafruit PWM controller: https://www.amazon.com/dp/B00EIB0U7A?ref_=ppx_hzsearch_conn_dt_b_fed_asin_title_12. Any other PCA9685-based board can be used instead.
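On the software side, pointing a servo at a detected person mostly comes down to converting a pulse width into PCA9685 ticks and a detection position into a pulse width. Here is a minimal sketch of that math; the 50 Hz update rate and the 1000–2000 µs pulse range are my assumptions, not values from this project, so adjust them for your servos:

```python
# Sketch: map a normalized horizontal position (0.0 = left edge of the
# camera frame, 1.0 = right edge) to a PCA9685 tick count for a hobby
# servo. Assumes 50 Hz PWM and a 1000-2000 us pulse range.

PWM_FREQ_HZ = 50                          # standard hobby-servo rate
PERIOD_US = 1_000_000 // PWM_FREQ_HZ      # 20 000 us per period
RESOLUTION = 4096                         # PCA9685 is 12-bit

MIN_PULSE_US = 1000                       # assumed servo end stops
MAX_PULSE_US = 2000

def pulse_to_ticks(pulse_us: float) -> int:
    """Convert a pulse width in microseconds to PCA9685 ticks."""
    return round(pulse_us * RESOLUTION / PERIOD_US)

def position_to_ticks(x: float) -> int:
    """Map a normalized detection position to a servo tick count."""
    x = min(max(x, 0.0), 1.0)             # clamp to the frame
    pulse = MIN_PULSE_US + x * (MAX_PULSE_US - MIN_PULSE_US)
    return pulse_to_ticks(pulse)
```

With the legacy `Adafruit_PCA9685` driver the resulting tick count can be passed as the `off` argument of `set_pwm(channel, 0, ticks)`; the newer CircuitPython driver expects a 16-bit `duty_cycle` instead, so scale accordingly.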
The model is split into multiple parts to simplify printing and reduce the number of supports. They have to be glued together after printing, except the internal eye holder. The internal eye holder has holes for M4 H8 heat-set nut inserts; the screws in the inserts secure the eye mechanism by Will Cogley mentioned above. The internal eye holder is inserted into the external eye holder without glue and is held in place by friction (you can shim it if the friction is not sufficient). The internal eye holder is printed in transparent PLA and must have 100% infill at the front. It has holes for four small 3 mm LEDs (I used https://www.amazon.com/dp/B07QXR5MZB?ref_=ppx_hzsearch_conn_dt_b_fed_asin_title_12). The LEDs are connected to the same PWM controller as the servos.
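Because the LEDs share the PCA9685 with the servos, their brightness is just a PWM duty cycle on a spare channel. A small sketch of the conversion (channel assignments and the 0.0–1.0 brightness convention are my assumptions):

```python
# Sketch: convert an LED brightness fraction into a 12-bit PWM count
# for a PCA9685 channel. Full-on corresponds to 4095 counts.

MAX_COUNT = 4095  # maximum 12-bit count (full brightness)

def brightness_to_counts(brightness: float) -> int:
    """Convert a 0.0-1.0 brightness fraction to a 12-bit PWM count."""
    brightness = min(max(brightness, 0.0), 1.0)
    return int(brightness * MAX_COUNT + 0.5)  # round half up
```

For example, the half-brightness green idle state described below would use a count of about 2048 on each green channel.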
With the current software, when the camera does not see any people in front of Bender, the two green LEDs (left and right) are lit at half brightness and the eyelids are closed. Once a person is detected, the green LEDs turn off and the two violet LEDs turn on; the eyes open and track the closest detected person. The lids blink periodically to improve the effect.
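The idle/tracking behavior above can be sketched as a small state function. This is an illustration only, not the project's actual code: detections are assumed to arrive as `(x_center, box_area)` tuples in normalized frame coordinates, and "closest person" is approximated as the largest bounding box.

```python
# Sketch of the idle/tracking behavior: no detections -> green LEDs at
# half brightness, lids closed; otherwise violet LEDs on, lids open,
# eyes aimed at the largest (assumed closest) detected person.

def update_state(detections):
    """Return LED/eyelid state and the horizontal target, if any."""
    if not detections:
        # Idle: green at half brightness, violet off, lids closed.
        return {"green": 0.5, "violet": 0.0,
                "lids_open": False, "target_x": None}
    # Tracking: pick the largest bounding box as the closest person.
    closest = max(detections, key=lambda d: d[1])
    return {"green": 0.0, "violet": 1.0,
            "lids_open": True, "target_x": closest[0]}
```

In a real loop this would run once per camera frame, with the periodic blink layered on top as a timer that briefly closes the lids.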
The base has screw holes to attach the head to a wooden (or any other material) plate.
Video: TBD (will provide the link later)
SW: TBD
The author marked this model as their own original creation.