LLM Unicode Prompt Injection

Be careful copying AI prompts… It has become commonplace on social media to see posts sharing “super prompts” or prompt templates. Researchers have discovered a technique that uses Unicode to hide prompt injection in non-printable characters. Prompt injection, a term coined by Simon Willison, is a type of attack that attempts to override a user or application prompt, either to alter the results or to exfiltrate earlier parts of the prompt or content used in retrieval-augmented generation (RAG). It is a real challenge for LLM apps at the moment, as there are no completely reliable mitigation techniques. ...
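One low-tech sanity check before pasting a copied prompt into anything is to scan it for bytes outside printable ASCII. A minimal bash sketch (the filename and the specific tag character U+E0048 are illustrative assumptions, not from the research):

```shell
# Build a "prompt" that ends with an invisible Unicode tag character.
# \363\240\201\210 are the UTF-8 bytes of U+E0048 (TAG LATIN CAPITAL
# LETTER H), one of the non-printing characters this technique abuses.
printf 'Write me a poem about spring.\363\240\201\210\n' > /tmp/prompt.txt

# The file looks innocent when printed...
cat /tmp/prompt.txt

# ...but a C-locale scan flags any byte outside printable ASCII:
if LC_ALL=C grep -qn '[^[:print:][:space:]]' /tmp/prompt.txt; then
    echo "hidden characters found"
fi
```

Forcing the C locale makes `[:print:]` cover only ASCII, so every byte of a multi-byte Unicode character trips the pattern.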

January 17, 2024 · 1 min · Jered Sutton

LLMs Poison and Trust

A fascinating new paper by the Anthropic team explores how LLMs can be ‘trained’ to appear normal during training, only to manifest malicious behavior once deployed. Andrej Karpathy expanded on this idea, hypothesizing that this initial training could be seeded by publishing malicious text on the internet, where it would be picked up for use in training new models. This might not seem significant, as LLMs merely generate text. However, consider the capabilities of Open Interpreter, a program that helps you run code generated by LLMs. With Open Interpreter you can: ...

January 14, 2024 · 2 min · Jered Sutton

Response to Rabbit R1

Three days ago Rabbit announced the Rabbit R1, a new handheld AI device co-designed by Teenage Engineering. The reactions have been polarized: some see it as merely an app in physical form, while others hail it as a revolution in how we interact with machines. The R1’s affordability and Teenage Engineering’s design are certainly appealing. However, I’m curious about Rabbit’s business model, since the R1 doesn’t run inference locally and doesn’t require a subscription. I wonder what their revenue strategy is and what the implications for user privacy are. That being said, the privacy claims on the website seem really solid. ...

January 13, 2024 · 2 min · Jered Sutton

Dockercraft

I woke up this morning not feeling like I was enough of a hipster admin, so I decided to write this keyword..er.. educational post about Docker. I am going to spare you the explanation of what Docker is and what it does; you can find that elsewhere. So let's say you have Docker installed somewhere and you want to try running your own Minecraft server. You could do something like this: ...
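As a sketch of what that could look like, here is one way to launch a containerized Minecraft server. The `itzg/minecraft-server` image name, port, and environment variable are assumptions on my part, not taken from the post:

```shell
# Run a Minecraft server in a container, publishing the default
# Minecraft port (25565) and accepting the EULA via an env variable.
docker run -d --name mc \
    -p 25565:25565 \
    -e EULA=TRUE \
    itzg/minecraft-server
```

From there, `docker logs -f mc` follows the server's startup output.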

March 3, 2015 · 2 min · Jered Sutton

When strace isn't enough Part 1

An important tool in any Linux admin’s toolkit is the venerable strace command. It gives us insight into what a program is actually doing. As awesome as strace can be, it doesn’t tell us everything. This series of articles will familiarize you with some other commands and approaches for gaining insight into program execution. ...
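One complementary, trace-free window into a running process is procfs; a quick Linux-only sketch (inspecting the shell itself via `self`, but any PID works):

```shell
# List the open file descriptors of the current process
ls /proc/self/fd

# Show its current scheduling state straight from the kernel
grep '^State' /proc/self/status
```

Unlike strace, reading `/proc` does not stop or slow the target process, which matters on production boxes.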

November 4, 2013 · 3 min · jsuttonlogin

Back to Basics: 5 tips for the IT worker in all of us

As I have worked in the IT industry, I have come across a few ideas that appear over and over. These ideas are fundamentally simple, and they are good things to think about regardless of what level you work at in IT. Take some time to meditate on them and become an IT superhero. ...

October 27, 2013 · 3 min · jsuttonlogin

Quick Tip: View Linux process limits

I have on several occasions needed to troubleshoot issues that turned out to be Linux limiting the number of open files for a given process. This can be an annoying issue to troubleshoot, since many programs do not gracefully handle the condition and Linux does not log anything about it by default. This really applies to all of the Linux process limits, not just open files. ...
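For reference, the per-process limits are visible in procfs; a minimal sketch of checking the open-files limit (substitute a real PID for `self` when inspecting another process):

```shell
# All soft and hard limits of a process, straight from the kernel
cat /proc/self/limits

# Just the open-files line, which is the usual culprit
grep 'Max open files' /proc/self/limits
```

The output shows both the soft limit (what the process hits) and the hard limit (the ceiling it could raise the soft limit to).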

July 14, 2013 · 2 min · jsuttonlogin

Bash Nagios plugin

Today let's have a look at one way to construct a Nagios plugin in bash. I would usually write these in Perl, but sometimes that is not possible. This plugin is actually written to be executed using NRPE. <code>
#!/bin/bash
# bash nagios plugin

###
# Variables
###
OK=0
WARNING=1
CRITICAL=2
UNKNOWN=3   # Nagios defines UNKNOWN as exit code 3

TO_RETURN=${OK}
TO_OUTPUT=''

# Print usage information and exit
print_usage(){
    echo -e "\n" \
        "usage: ./check_uptime -w 20 -c 30 \n" \
        "\n" \
        "-w <days>  warning value\n" \
        "-c <days>  critical value\n" \
        "-h         this help\n" \
        "\n" && exit ${UNKNOWN}
}

###
# Options
###
# Loop through $@ to find flags
while getopts ":hw:c:" FLAG; do
    case "${FLAG}" in
        w) # Warning value
            WARNING_VALUE="${OPTARG}" ;;
        c) # Critical value
            CRITICAL_VALUE="${OPTARG}" ;;
        h) # Print usage information
            HELP=1 ;;
        [:?]) # Unknown flag or missing argument
            print_usage ;;
    esac
done

###
# Functions
###
log_date(){
    date +"%b %e %T"
}

error() {
    NOW=$(log_date)
    echo "${NOW}: ERROR: $1"
    exit 1
}

warning() {
    NOW=$(log_date)
    echo "${NOW}: WARNING: $1"
}

info() {
    NOW=$(log_date)
    echo "${NOW}: INFO: $1"
}

# Do something: extract the uptime in days
get_cmd_output(){
    uptime | sed 's/.*up \([0-9]*\) day.*/\1/' || error "failed to run command"
}

###
# Program execution
###
[ "${HELP}" ] && print_usage

if [ "${WARNING_VALUE}" ] && [ "${CRITICAL_VALUE}" ]
then
    CMD_OUTPUT=$(get_cmd_output)
else
    print_usage
fi

if [ "${CMD_OUTPUT}" ] && [ "${CMD_OUTPUT}" -gt "${CRITICAL_VALUE}" ]
then
    TO_RETURN=${CRITICAL}
elif [ "${CMD_OUTPUT}" ] && [ "${CMD_OUTPUT}" -gt "${WARNING_VALUE}" ]
then
    TO_RETURN=${WARNING}
elif [ "${CMD_OUTPUT}" ] && [ "${CMD_OUTPUT}" -gt 0 ]
then
    TO_RETURN=${OK}
else
    TO_RETURN=${UNKNOWN}
fi

if [ ${TO_RETURN} -eq ${CRITICAL} ]
then
    TO_OUTPUT="CRITICAL "
elif [ ${TO_RETURN} -eq ${WARNING} ]
then
    TO_OUTPUT="WARNING "
elif [ ${TO_RETURN} -eq ${OK} ]
then
    TO_OUTPUT="OK "
else
    TO_OUTPUT="UNKNOWN "
fi

TO_OUTPUT="${TO_OUTPUT}| uptime=${CMD_OUTPUT};${WARNING_VALUE};${CRITICAL_VALUE}"

echo "${TO_OUTPUT}"
exit ${TO_RETURN}
</code> Let's break it down… ...

April 29, 2013 · 3 min · jsuttonlogin

Bash Parallel Execution

If you have ever wanted an easy way to execute multiple jobs in parallel in bash, then this is the snippet for you. This was originally posted on Stack Exchange and has been modified a bit. <code>
#!/bin/bash

# How many jobs to run at one time
JOBS_AT_ONCE=20

# The bgxupdate and bgxlimit functions below allow for
# running X jobs in parallel in bash. They are taken from:
# http://stackoverflow.com/questions/1537956/bash-limit-the-number-of-concurrent-jobs/1685440#1685440

# bgxupdate - update active processes in a group.
#   Works by transferring each process to a new group
#   if it is still active.
# in:  bgxgrp   - current group of processes.
# out: bgxgrp   - new group of processes.
# out: bgxcount - number of processes in new group.
bgxupdate() {
    bgxoldgrp=${bgxgrp}
    bgxgrp=""
    ((bgxcount = 0))
    bgxjobs=" $(jobs -pr | tr '\n' ' ')"
    for bgxpid in ${bgxoldgrp} ; do
        echo "${bgxjobs}" | grep " ${bgxpid} " >/dev/null 2>&1
        if [[ $? -eq 0 ]] ; then
            bgxgrp="${bgxgrp} ${bgxpid}"
            ((bgxcount = bgxcount + 1))
        fi
    done
}

# bgxlimit - start a sub-process with a limit.
#   Loops, calling bgxupdate until there is a free
#   slot to run another sub-process. Then runs it
#   and updates the process group.
# in: $1  - the limit on processes.
# in: $2+ - the command to run for the new process.
# in:  bgxgrp - the current group of processes.
# out: bgxgrp - new group of processes.
bgxlimit() {
    bgxmax=$1 ; shift
    bgxupdate
    while [[ ${bgxcount} -ge ${bgxmax} ]] ; do
        sleep 1
        bgxupdate
    done
    if [[ "$1" != "-" ]] ; then
        $* &
        bgxgrp="${bgxgrp} $!"
    fi
}

bgxgrp="process_group_1"

for LINE in $(cat hosts)
do
    CHECK_SCRIPT='echo $(hostname),$(cat /etc/debian_version)'
    bgxlimit ${JOBS_AT_ONCE} ssh ${LINE} "${CHECK_SCRIPT}"
done

# Wait until all queued processes are done.
bgxupdate
while [[ ${bgxcount} -ne 0 ]] ; do
    oldcount=${bgxcount}
    while [[ ${oldcount} -eq ${bgxcount} ]] ; do
        sleep 1
        bgxupdate
    done
done
</code> In this script the primary changes are defining the maximum number of simultaneous jobs and doing somewhat useful work: returning the hostname and the Debian version of each host. ...
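The same cap-the-running-jobs idea can be sketched without the helper functions, using only `jobs -pr`. This toy version (the job count, sleep durations, and results path are arbitrary) runs six dummy jobs at most three at a time:

```shell
#!/bin/bash
MAX_JOBS=3
RESULTS=/tmp/parallel_demo.txt
: > "${RESULTS}"

for i in 1 2 3 4 5 6; do
    # Block while the number of running background jobs is at the cap
    while [ "$(jobs -pr | wc -l)" -ge "${MAX_JOBS}" ]; do
        sleep 0.2
    done
    # Each "job" just sleeps briefly and records that it finished
    ( sleep 0.3; echo "job ${i} done" >> "${RESULTS}" ) &
done

wait   # let the remaining jobs drain
cat "${RESULTS}"
```

Polling `jobs -pr` is cruder than tracking a process group explicitly, but it shows the core pattern in a dozen lines.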

April 14, 2013 · 2 min · jsuttonlogin

Turboprop

As an extension of my previous post on parallel execution, I present turboprop. The initial version of this script optimizes a MySQL database with multiple tables being processed at the same time. In the future it may be extended to allow for more operations from the command line, such as MySQL dumps. <code>
#!/bin/bash
# turboprop

# How many jobs to run at one time
JOBS_AT_ONCE=20

# Command to run in parallel, in this case mysqlcheck -o
COMMAND="mysqlcheck -o"

# Print usage information and exit
print_usage(){
    echo -e "\n" \
        "usage: ./turboprop -d databasename \n" \
        "Optimizes mysql tables in parallel\n" \
        "-d <databasename>  Database to optimize\n" \
        "-h                 this help\n" \
        "\n" && exit 1
}

###
# Options
###
# Loop through $@ to find flags
# (note: "h" must appear in the optstring for -h to be recognized)
while getopts ":hd:" FLAG; do
    case "${FLAG}" in
        d) # Database name
            DB=${OPTARG} ;;
        h) # Print usage
            print_usage ;;
        [:?])
            print_usage ;;
    esac
done

[ ! "${DB}" ] && print_usage

###
# Functions
###
# The bgxupdate and bgxlimit functions below allow for
# running X jobs in parallel in bash. They are taken from:
# http://stackoverflow.com/questions/1537956/bash-limit-the-number-of-concurrent-jobs/1685440#1685440

# bgxupdate - update active processes in a group.
#   Works by transferring each process to a new group
#   if it is still active.
# in:  bgxgrp   - current group of processes.
# out: bgxgrp   - new group of processes.
# out: bgxcount - number of processes in new group.
bgxupdate() {
    bgxoldgrp=${bgxgrp}
    bgxgrp=""
    ((bgxcount = 0))
    bgxjobs=" $(jobs -pr | tr '\n' ' ')"
    for bgxpid in ${bgxoldgrp} ; do
        echo "${bgxjobs}" | grep " ${bgxpid} " >/dev/null 2>&1
        if [[ $? -eq 0 ]] ; then
            bgxgrp="${bgxgrp} ${bgxpid}"
            ((bgxcount = bgxcount + 1))
        fi
    done
}

# bgxlimit - start a sub-process with a limit.
#   Loops, calling bgxupdate until there is a free
#   slot to run another sub-process. Then runs it
#   and updates the process group.
# in: $1  - the limit on processes.
# in: $2+ - the command to run for the new process.
# in:  bgxgrp - the current group of processes.
# out: bgxgrp - new group of processes.
bgxlimit() {
    bgxmax=$1 ; shift
    bgxupdate
    while [[ ${bgxcount} -ge ${bgxmax} ]] ; do
        sleep 1
        bgxupdate
    done
    if [[ "$1" != "-" ]] ; then
        $* &
        bgxgrp="${bgxgrp} $!"
    fi
}

###
# Program Execution
###
bgxgrp="process_group_1"

# -N skips the column-name header row of the query output;
# mysqlcheck needs both the database and the table name.
for TABLE in $(mysql -N ${DB} -e 'show tables')
do
    bgxlimit ${JOBS_AT_ONCE} ${COMMAND} ${DB} ${TABLE}
done

# Wait until all queued processes are done.
bgxupdate
while [[ ${bgxcount} -ne 0 ]] ; do
    oldcount=${bgxcount}
    while [[ ${oldcount} -eq ${bgxcount} ]] ; do
        sleep 1
        bgxupdate
    done
done
</code>

April 7, 2013 · 3 min · jsuttonlogin